
Unkle Mike. Culling Of Invisible Objects In Quake Related Engines

Intro
Despite all the great achievements in video card development and the sworn assurances of developers about drawing 2 to 3 million polygons on screen without a significant FPS drop, in reality it's not all that rosy. A lot depends on the rendering methods, on the number of textures involved and on the complexity and number of shaders involved. So even if all this really does ultimately lead to high performance, it only happens in the demos that the developers themselves kindly offer. In these demos, some "spherical dragons in a vacuum" made of a good hundred thousand polygons are indeed drawn very quickly. However, the real in-game situation for some reason never looks like this funny dragon from a demo, and as a result many comrades abandon the development of their "Crysis killer" as soon as they manage to render a single room with a couple of light sources, because for some reason the FPS in this room fluctuates around 40-60 even on their 8800GTS, and upon creating a second room it drops to a whopping 20. Of course, with problems like this, it would be wrong to claim that things aren't that bad, that the trouble of such developers lies purely in the absence of correctly implemented culling, and that it is time for them to read this article. But for those who have already overcome "the first room syndrome" and tried to draw a world - an inferior one, but a world nonetheless - this problem really is relevant.
However, it should be borne in mind that QUAKE, written in ancient times, was designed exclusively for levels of a "corridor" kind; therefore the clipping methods discussed in this article are not applicable to landscapes such as the ones in STALKER or Crysis, since completely different methods work there, whose analysis is beyond the scope of this article. Here we'll talk about the classic corridor approach to mapping and the effective clipping of invisible surfaces, as well as the clipping of entire objects.

The paper tree of balloon leaves

As you probably know, QUAKE uses BSP, the Binary Space Partitioning tree. This is a space indexing algorithm, and BSP itself doesn't care whether the space is open or closed; it doesn't even care if the map is sealed, it can be anything. BSP implies the division of a three-dimensional object by a certain number of secant planes, called "the branches" or "the nodes", into volumetric areas or rooms called "the leaves". The names are confusing, as you can see. In QUAKE / QUAKE2 the branches usually contain information about the surfaces that the branch contains, and the leaves are empty space, not filled with anything. Although sometimes leaves may contain water, for example (in the form of a variable that indicates, specifically, that we've got water in this leaf). Also, the leaf contains a pointer to the potential visibility data (Potentially Visible Set, PVS) and a list of all surfaces that are marked as visible from this leaf. Actually, the approach itself implies that we are able to draw our world however we prefer, either using leaves only or using branches only. This is especially noticeable in different versions of QUAKE: for example, in QUAKE1 we just mark the surfaces in a leaf as visible and then sequentially go through all the surfaces visible from a particular branch, assembling chains of surfaces to draw them later. But in QUAKE3, we can only accumulate visible surfaces once we get into the leaf itself.
In QUAKE and QUAKE2, all surfaces must lie on the nodes, which is why the BSP tree grows rather quickly, but in exchange this makes it possible to trace these surfaces by simply moving around the tree, without wasting time checking each surface separately, which affects the speed of the tracer positively. Because of this, a unique surface is linked to each node (the original surface is divided into several if necessary), so in the nodes we always have what is known to be visible beforehand, and therefore we can perform a recursive search on the tree using the frustum's bounding box as the direction of our movement along the BSP tree (the SV_RecursiveWorldNode function).
In QUAKE3, the tree was simplified and it tries to avoid cutting geometry as much as possible (a BSP tree is not even obliged to cut geometry; such cuts are merely a matter of the optimality of the tree). And surfaces in QUAKE3 do not lie on the nodes, because patches and triangle models lie there instead. What would happen if they were put on the nodes nevertheless can be seen in the example of "The Edge Of Forever" map that I recently compiled for an experimental version of Xash. It turns out that in places which had a couple thousand visible nodes and leaves in the original, there are almost 170 thousand of them with the new tree. And this is the result after all the preliminary optimizations, otherwise it could have been even more, he-he. Yeah, so... For this reason, the tree in QUAKE3 does not put anything on the nodes, and we definitely do need to get into the leaf, mark the visible surfaces in it and add them to the rendering list. In QUAKE / QUAKE2, on the contrary, going all the way down to the leaf itself is not necessary.
Invisible polygon cutoff (we are talking about world polys, separate game objects will be discussed a bit later) is based on two methods:
The first method is to use bit vectors of visibility (the so-called PVS - Potentially Visible Set). The second method is regular frustum culling, which actually has nothing to do with BSP but works just as efficiently, under certain conditions of course. Bottom line: together these two methods provide almost perfect clipping of invisible polygons, drawing a very small visible piece out of the vast world. Let's take a closer look at PVS and how it works.

When FIDO users get drunk

The underlying idea of PVS is to capture the fact that one leaf is visible from another. With BSP alone this is basically impossible, because leaves from completely different branches can be visible at the same time and you will never find a pattern for leaves from different branches seeing each other - it simply doesn't exist. Therefore, the compiler has to do the hard work for us, explicitly checking the visibility of all leaves from all leaves. The information about visibility in this case is scanty: one Boolean variable with possible values 0 and 1, where 0 means the leaf is not visible and 1 means the leaf is visible. It is easy to guess that for each leaf there is a unique set of such Boolean variables the size of the total number of leaves on the map. So a set like this, but for all the leaves, will take far more space: the number of leaves multiplied by the number of leaves and multiplied by the size of the variable in which we store the visibility information (0 \ 1).
And the number of leaves, as you can easily guess, is determined by the size of the map and by the compiler, which, upon reaching a certain volume size, stops dividing the world further and treats the resulting node as a leaf. Leaf size varies between the different QUAKEs. In QUAKE1, for example, leaves are very small: the compiler divides a standard boxmap in QUAKE1 into as many as four leaves, while in QUAKE3 a similar boxmap takes only one leaf. But we digress.
Let's estimate the size of our future PVS file. Suppose we have an average map with a couple thousand leaves. If we imagine that the visibility information for a leaf is stored in a variable of char type (1 byte), then the size of the visdata for this level would be, no more no less, almost 4 megabytes. That is, a lot. Of course, an average modern developer would shrug and pack the final result into a zip archive, but back in 1995 end users had modest machines with little memory, and therefore visdata was packed in "more different" ways. The first step of the optimization is to store the data not in bytes but in bits. It is easy to guess that this approach reduces the final size as much as 8 times and, characteristically, does so without any resource-intensive algorithms like Huffman trees. In exchange, though, it somewhat worsened the usability and readability of the code. Why am I writing this? Because of many developers' lack of understanding of conditions in code like this:
if ( pvs[leafnum >> 3] & ( 1 << ( leafnum & 7 ))) { }
Actually, this condition implements simple, beautiful and elegant access to the desired bit in the array (as one may recall, you cannot address anything smaller than a byte, so individual bits can only be reached via bitwise operations).
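For illustration, here is the same bit-vector access wrapped into tiny helper functions - a minimal sketch in C; the function names are made up for this example, the engine itself simply inlines the expression shown above:

    #include <stdint.h>

    /* Test whether leaf 'leafnum' is marked visible in a PVS bit vector.
       One byte holds the visibility flags of eight consecutive leaves. */
    static int PVS_IsVisible( const uint8_t *pvs, int leafnum )
    {
        return pvs[leafnum >> 3] & ( 1 << ( leafnum & 7 ));
    }

    /* Mark leaf 'leafnum' as visible (what the compiler does while building visdata). */
    static void PVS_SetVisible( uint8_t *pvs, int leafnum )
    {
        pvs[leafnum >> 3] |= 1 << ( leafnum & 7 );
    }
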

Titans that keep the globe spinning

The visible part of the world is cut off in the same fashion: we find the current leaf where the player is located (in QUAKE this is implemented by the Mod_PointInLeaf function), then we get a pointer to the visdata for the current leaf (for convenience, it is linked directly to the leaf in the form of the "compressed_vis" pointer) and then simply go through all the leaves and branches of the map and check whether they are visible from our leaf (this can be seen in the R_MarkLeaves function). Each leaf that turns out to be visible from the current leaf is stamped with the current value of "r_visframecount", a counter which increases by one every frame. Thus, we note that this leaf is visible while building the current frame. In the next frame, "r_visframecount" is incremented by one and all the leaves are considered invisible again. As one can understand, this is much more convenient and much faster than revisiting all the leaves at the end of each frame and zeroing their "visible" variable. I drew attention to this feature because this mechanism also confuses some people who don't understand how it works.
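A minimal sketch of this "counter instead of clearing flags" trick, with structures simplified for the example (not the exact Quake declarations):

    typedef struct mleaf_s {
        int visframe;                 /* last frame this leaf was marked visible in */
        /* ... the rest of the leaf data ... */
    } mleaf_t;

    int r_visframecount;              /* bumped once per frame before marking */

    void MarkLeafVisible( mleaf_t *leaf )
    {
        leaf->visframe = r_visframecount;      /* valid for the current frame only */
    }

    int IsLeafVisibleThisFrame( const mleaf_t *leaf )
    {
        /* no per-frame clearing needed: a stale value simply won't match */
        return leaf->visframe == r_visframecount;
    }
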
The R_RecursiveWorldNode function "walks" along the leaves and branches marked this way. It cuts off obviously invisible leaves and accumulates a list of surfaces from the visible ones. Of course, the first check is for the equality of r_visframecount and the visframe of the node in question. Then the branch undergoes the frustum pyramid check, and if this check fails we don't climb any further along this branch. Having stumbled upon a leaf, we mark all its surfaces as visible in the same way, assigning the current r_framecount value to their visframe variable (later this helps us determine quickly whether a certain surface is visible in the current frame). Then, using a simple function, we determine which side of the branch's plane we are on (each branch has its own plane, literally called "plane" in the code) and, again, for now we just take all the surfaces linked to this branch and add them to the drawing chain (the so-called "texturechain"), although nobody can actually stop us from drawing them immediately, right there (in the QUAKE1 source code one can see both options), having previously checked these surfaces for clipping against the frustum pyramid, or at least having made sure that the surface faces us.
In QUAKE, each surface has a special flag, SURF_PLANEBACK, which helps us determine the orientation of the surface. In QUAKE3 there is no such flag anymore, and the clipping of invisible surfaces is not as efficient, sending twice as many surfaces for rendering. However, their total number after performing all the checks is not that great. Still, whatever one may say, adding this check to Xash3D raised the average FPS almost one and a half times compared to the original Half-Life. So much for the question of whether it is beneficial. But we digress.
So, after chaining and drawing the visible surfaces, we call R_RecursiveWorldNode again, but now for the second of the node's two branches. Just in case. Because visible surfaces may well be there too. When the recursion ends, the result will be either a fully rendered world or, at least, chains of visible surfaces - something that can actually be sent for rendering with OpenGL or Direct3D, unless, of course, we drew our world right in the R_RecursiveWorldNode function. This method, with minor upgrades, is successfully used in all three QUAKEs.
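Putting the pieces together, here is a condensed sketch of such a walk in C. This is a simplification of the idea described above, not the literal Quake source; helper names like R_MarkLeafSurfaces, R_ChainNodeSurfaces and PlaneDiff are placeholders for this example:

    void R_RecursiveWorldNode( mnode_t *node )
    {
        if( node->visframe != r_visframecount )
            return;                              /* not marked by R_MarkLeaves */
        if( R_CullBox( node->minmaxs, node->minmaxs + 3 ))
            return;                              /* node box is outside the frustum */

        if( node->contents < 0 )                 /* negative contents means a leaf */
        {
            R_MarkLeafSurfaces( (mleaf_t *)node );   /* stamp surfaces with r_framecount */
            return;
        }

        /* which side of the node's plane is the viewer on? */
        int side = ( PlaneDiff( r_origin, node->plane ) >= 0 ) ? 0 : 1;

        R_RecursiveWorldNode( node->children[side] );    /* near side first        */
        R_ChainNodeSurfaces( node, side );               /* texturechain this node */
        R_RecursiveWorldNode( node->children[!side] );   /* then the far side      */
    }
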

A naked man is in a wardrobe because he's waiting for a tram

One of the upgrades is the use of so-called areaportals. This is another optimization method, coming straight out of QUAKE2. The point of areaportals is that the game logic can turn the visibility of an entire sector on and off at its discretion. Technically, this is achieved as follows: the world is divided into zones similar to the usual partitioning along the BSP tree, however, there can't be more than 256 of them (later I will explain why) and they are not connected in any way.
Regular visibility is determined just like in QUAKE; however, by placing a special "func_areaportal" entity we can force the compiler to split an area in two. This mechanism operates on approximately the same principle as the algorithm that searches for holes in the map, so you won't deceive the compiler by putting func_areaportal in a bare field - the compiler will simply ignore it. Although if you make the areaportal the size of the cross-section of that field (reaching the skybox in all directions), the zones will be divided in spite of everything. We can observe this technique in Half-Life 2, where an attempt to return to old places (with cheats, for example) shows us disconnected areaportals and a brief transition through the void from one zone to another. Actually, this mechanism is what helped Half-Life 2 successfully simulate large spaces while still using the BSP level structure (I have already said that BSP - its visibility check algorithm, to be precise - is not very suitable for open spaces).
So, an installed areaportal forcibly breaks one zone into two, and the rest of the zoning is at the discretion of the compiler, which at the same time makes sure not to exceed the 256-zone limit, so zone sizes can be completely different. Again, it depends on the overall size of the map. Our areaportal is connected to some door dividing these two zones. When the door is closed, it turns the areaportal off and the zones are separated from each other. Therefore, if the player is not in the cut-off zone, rendering it is not worth it. In QUAKE, we'd have to do a bunch of checks, and it's quite possible that we could only cut off a fraction of the polygons (after all, the door itself is not an obstacle for the visibility check, much less for the frustum). Compare that with our case: one command is issued - and the whole room is excluded from visibility. "Not bad," you'd say, "but how would the renderer find out? After all, we performed all our operations on the server and the client does not know anything about it." And here we come back to the question of why there can't be more than 256 zones.
The point is that the information about the visibility of all zones is likewise packed into bit flags (like PVS) and transmitted to the client in a network message. Dividing 256 bits by 8 makes 32 bytes, which generally isn't that much. In addition, the tail of this information can easily be cut off if it contains only zeroes. The payback for such an optimization is an extra byte that has to be transmitted over the network to indicate the actual size of the message about the visibility of our zones. But, in general, this approach is justified.
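As a rough sketch of what that packing might look like (illustrative names, not the actual QUAKE2 or Xash3D code):

    #define MAX_AREAS      256
    #define MAX_AREA_BYTES (MAX_AREAS / 8)        /* 256 bits = 32 bytes */

    typedef struct {
        unsigned char bits[MAX_AREA_BYTES];
    } areabits_t;

    void Area_SetVisible( areabits_t *ab, int area )
    {
        ab->bits[area >> 3] |= 1 << ( area & 7 );
    }

    /* How many bytes actually need to go into the network message:
       trailing zero bytes are simply not sent, at the cost of one
       extra byte that carries this length. */
    int Area_TrimmedSize( const areabits_t *ab )
    {
        int len = MAX_AREA_BYTES;
        while( len > 0 && ab->bits[len - 1] == 0 )
            len--;
        return len;
    }
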

Light_environment traces enter from the back

Source Engine turned out to have a terrible bug which makes the whole areaportal thing nearly meaningless. Numerous problems arise because of it: water breaks down into segments that pop in, well, you should be familiar with all this by now. The areaportal cuts the geometry unpredictably, like an ordinary secant plane, but its whole point is to be predictable! Areaportal brushes in Source Engine have absolutely no priority in splitting the map. It should be like this: first, the tree is cut the regular way, and when no suitable planes are left, the final secant plane of the areaportal is used. This is the only way to cut the sectors correctly.

Modern problems

The second optimization method, as I said, is an increased size of the final leaf, akin to QUAKE3. It is believed that a video card will draw a certain number of polygons faster than the CPU can check whether they are visible. This comes from the very concept of a visibility check: if the check takes longer than simply drawing, then to hell with the check. The controversy of this approach stems from the wide range of video cards in the hands of end users, and it is strongly amplified by the surging fashion for laptops and netbooks, in which the video card is a very nominal and very weak thing (don't even consider its claimed Shader Model 3 support). Therefore, for desktop gaming machines it is more efficient to draw more at a time, but for the weak video cards of laptops traditional culling remains more reliable, even if it is as simple a culling as I described earlier.

Decompression sickness simulator

Although I should also mention the principles of frustum culling, since perhaps they are unclear to some. Cutoff by the frustum pyramid is pure mathematics, without any compiler precalculations. From the current direction of the player's gaze a clipping pyramid is built (the tip of the pyramid, in case it isn't obvious, sits at the player's point of view and its base opens out in the direction the player is looking). The angle between the walls of the pyramid can be acute or obtuse - as you probably guessed, it depends on the player's FOV. In addition, the player can forcefully pull the far wall of the pyramid closer to himself (yes, this is the notorious "MaxRange" parameter in the "worldspawn" settings of the map editor). Of course, OpenGL also builds a similar pyramid for its internal needs when it takes information from the projection matrix, but we're talking about the local pyramid now. The finished pyramid consists of 4-6 planes (QUAKE uses only 4 planes and trusts OpenGL to independently clip the far and near polygons, but if you write your own renderer and intend to support mirrors and portals you will definitely need all six planes). The frustum test itself is an elementary check for the presence of an AA-box (AABB, Axis-Aligned Bounding Box) in the frustum pyramid - or, speaking more correctly, a check for their intersection. Let me remind you that each branch has its own bounding volume (a fragment of the secant plane bounded by the neighboring perpendicular secant planes) which is checked for intersection. Unfortunately, the frustum test has one fundamental drawback - it cannot cut off what is directly in front of the player. We can adjust the cutoff distance, we can even pull the trick they use in QFusion, where the final zFar value is calculated every frame before rendering and then taken into account in entity clipping, but after all, whatever they say, that value itself was obtained from PVS information. Therefore, neither of the two methods can replace the other; they complement each other. This should be remembered.
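A minimal frustum-versus-AABB rejection test might look like the sketch below (a generic illustration assuming inward-facing planes stored as normal plus dist; Quake itself routes this through BoxOnPlaneSide with precomputed signbits):

    typedef struct {
        float normal[3];
        float dist;
    } plane_t;

    /* Returns 1 if the box is completely outside the frustum and can be culled. */
    int CullBox( const float mins[3], const float maxs[3],
                 const plane_t *frustum, int numplanes )
    {
        for( int i = 0; i < numplanes; i++ )
        {
            const plane_t *p = &frustum[i];

            /* pick the box corner that lies furthest along the plane normal */
            float x = ( p->normal[0] >= 0.0f ) ? maxs[0] : mins[0];
            float y = ( p->normal[1] >= 0.0f ) ? maxs[1] : mins[1];
            float z = ( p->normal[2] >= 0.0f ) ? maxs[2] : mins[2];

            /* if even that corner is behind the plane, the whole box is */
            if( p->normal[0]*x + p->normal[1]*y + p->normal[2]*z < p->dist )
                return 1;
        }
        return 0;    /* intersects the frustum or lies inside it */
    }
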

I gotta lay off the pills I'm taking

It seems we have figured out the rendering of the world, so now we move on smoothly to the culling of moving objects... which are all the visible objects in the world! Even the ones that, at first glance, stand still and aren't planning to move anywhere. Because the player moves! From one point he still sees a certain static object, and from another point, of course, he no longer does. This detail should also be taken into account.
Actually, at the beginning of this article I already described the algorithm of the objects' visibility check: first we find the visible leaf for the player, then we find the visible leaf for the entity, and then we check by visdata whether they see each other. Let me also clarify (in case someone doesn't get it) that each moving entity is assigned the number of the leaf it currently occupies, i.e. for its own current position; the leaves themselves are, of course, static and always stay in the same place.

Ostrich is such an OP problem solver

So the method described above has two potential problems:
The first problem is that even if A sees B then, oddly enough, B is far from always seeing A. In other words, entity A can see entity B, but this does not mean that entity B sees entity A - and no, it's not about one of them "looking" the other way. So why does this happen? Most often for two reasons:
The first reason is that one of the entities' ORIGIN sits tight inside a wall, and the Mod_PointInLeaf function for it returns the outer "zero" leaf, from which EVERYTHING is visible (haven't any of you ever flown around outside the map?). Meanwhile, no leaf inside the map can see the outer leaf - these two facts together explain the interesting effect of the entire world geometry becoming visible and, on the contrary, all objects disappearing when you fly outside the map. In normal play, similar problems can occur for objects attached to a wall or recessed into a wall. For example, sometimes the sounds of a pressed button or an opening door disappear because its current position went beyond the world borders. This is fought by swapping objects A and B or by obtaining alternative points for the position of an object, but all the same, none of it is very reliable.

But lawyer said that you don't exist

In addition, as I said, there is another problem. It comes from the fact that not every entity fits into a single leaf. Only the player is small enough to always be found in one leaf only (well, in the most extreme case, in two leaves on the border of water and air - a phenomenon fought with various hacks, by the way), but some giant hentacle or, on the contrary, an elevator made as a door entity can easily occupy 30-40 leaves at a time. An attempt to check a single leaf (for example, the one where the center of the model is) will inevitably lead to a deplorable result: as soon as the center of the object leaves the player's visibility range, the entire object will disappear completely. The most common case is the notorious func_door used as an elevator. There is one in QUAKE on E1M1. Observe: it travels halfway, its ORIGIN ends up outside the map, and therefore it should disappear from the player's field of view. However, it does not go anywhere, right? Let us see in greater detail how this is done.
The simplest idea that comes to mind: since the object occupies several leaves, we should save them all somewhere in the object's structure in the code and check them one by one. If at least one of these leaves is visible, then the whole object is visible (for example, its very tip). This is exactly what was implemented in QUAKE: a static array for 16 leaves and a simple recursive function, SV_FindTouchedLeafs, that collects all the leaves within the bounds stored in the "pev->absmin" and "pev->absmax" variables (pev being a pointer to the entvars_t table). absmin and absmax are recalculated each time SV_LinkEdict (or its more specific case, UTIL_SetOrigin) is called. Hence the quite logical conclusion that simply changing the ORIGIN without recalculating the visible leaves will sooner or later take the object out of visibility, even if, surprisingly enough, it's right in front of the player and the player should technically still be able to see it. Inb4 "why does one have to call UTIL_SetOrigin, wouldn't it be easier to just assign a new value to the "pev->origin" vector without calling this function?" It wouldn't.
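The idea of that collection pass can be sketched roughly like this (simplified structures and a made-up LeafNumber helper; the real code works on edict_t and the world's node array):

    #define MAX_ENT_LEAFS 16

    typedef struct {
        float absmin[3], absmax[3];        /* entity bounds set by SV_LinkEdict */
        int   leafnums[MAX_ENT_LEAFS];
        int   num_leafs;
    } ent_links_t;

    void FindTouchedLeafs( ent_links_t *ent, mnode_t *node )
    {
        if( node->contents == CONTENTS_SOLID )
            return;                                  /* solid leaf, nothing to record */

        if( node->contents < 0 )                     /* any other leaf */
        {
            if( ent->num_leafs < MAX_ENT_LEAFS )     /* the hard cap discussed below */
                ent->leafnums[ent->num_leafs++] = LeafNumber( (mleaf_t *)node );
            return;
        }

        /* a node: recurse into every side that the entity's box actually touches */
        int sides = BoxOnPlaneSide( ent->absmin, ent->absmax, node->plane );
        if( sides & 1 ) FindTouchedLeafs( ent, node->children[0] );
        if( sides & 2 ) FindTouchedLeafs( ent, node->children[1] );
    }
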
This method solves both of the earlier problems nicely: it fights the loss of visibility when the object's ORIGIN goes beyond the world borders, and it evens out the asymmetry between A->B and B->A visibility.

A secret life of monster_tripmine

Actually, there is yet another problem, but it does not show up immediately. Remember, we've got an array of 16 leaves. But what if that is not enough? Thank God there are no beams in QUAKE and no very long elevators made as func_door either - for this exact reason. Because when the array is filled to capacity, the SV_FindTouchedLeafs function simply stops adding leaves, and we can only hope there won't be too many cases of an object disappearing right before our eyes. But in the original QUAKE, such cases may well happen. In Half-Life, the situation is even worse: as you may remember, there are beams that can stretch across half the map - tripmine beams, for example - and a situation may occur where we see just the very tip of the beam. For most of these beams, 16 leaves are clearly not enough. Valve tried to remedy the situation by increasing the array to 48 leaves. That helped. On the early maps. If you remember, at the very beginning of the game, after the player gets off the tram, he enters that epic elevator that takes him down. The elevator is made as a door entity and it occupies exactly 48 leaves; apparently, the final size of the array was chosen after its dimensions. Then the programmers realized that this isn't really a solution, because no matter how much you expand the array, it can still turn out to be too small for something. So they bolted on an alternative visibility check: the head-node (headnode) check. In short, this is still the same SV_FindTouchedLeafs, but now it is called directly from the place where visibility is checked, with the visdata passed into it. It is not used very often, because it is slower than checking pre-accumulated leaves; it is intended precisely for non-standard cases like this one.
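Conceptually the fallback looks something like the sketch below (inspired by the CM_HeadnodeVisible idea from the QUAKE2 family; names and structures are simplified for illustration):

    /* Walk the subtree under the entity's head node and report whether at
       least one non-solid leaf under it is marked visible in 'visbits'. */
    int HeadnodeVisible( mnode_t *node, const unsigned char *visbits )
    {
        if( node->contents < 0 )                     /* reached a leaf */
        {
            if( node->contents == CONTENTS_SOLID )
                return 0;

            int leafnum = LeafNumber( (mleaf_t *)node );
            return ( visbits[leafnum >> 3] & ( 1 << ( leafnum & 7 ))) != 0;
        }

        if( HeadnodeVisible( node->children[0], visbits ))
            return 1;
        return HeadnodeVisible( node->children[1], visbits );
    }
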
Well, since, I hope, the general picture of the clipping mechanism is already taking shape in your mind, I will finish the article in just a few words.
On the server, all objects that have passed the visibility check are added to the network message containing information about visible objects. Thus, on the client, the list of visible entities has already been cut down by PVS, we do not have to do it again, and a simple frustum check is enough. You may ask: "why did we have to cut off invisible objects on the server when we could do it later, once we are on the client?" I reply: yes, we could, but the objects cut off on the server never got into the network message and saved us some traffic. And since the player does not see them anyway, what is the point of transferring them to the client just to check them for visibility afterwards? It's a kind of double optimization :)
© Uncle Mike 2012
submitted by crystallize1 to HalfLife


Addressing Canada’s Employment Insurance Gap For Self-Employed Workers

Source: TD
Ksenia Bushmeneva, Economist
Dated July 15th, 2020

Highlights


Chart 1 - Workers in More Precarious Employment See Steep Job Losses

Chart 2 - COVID-19 Self-employed to Cut Hours Worked Drastically

EI Leaves Many Non-Standard Workers Behind


Chart 3 - Self-employed Workers Much More Likely to Apply for CERB

Chart 4 - Prevalence of Self-employment Varies by Province

What Complicates Offering EI Coverage For Non-Standard Workers


Chart 5 - Maternity and Family Benefits Available to Self-employed

Chart 6 - Sickness, Disability, and Work Injury Coverage Available to Self-Employed

Some Solutions Based on The International Experience


Chart 7 - Unemployment Benefits Coverage Options to Self-employed

Chart 8 - Old-age Pensions Coverage Options Available to Self-employed

Concluding Remarks



End Notes

  1. Since 2010 self-employed workers can voluntarily participate in the EI Special Benefits for Self-Employed Workers (SBSE) program to gain access to many life-event-type benefits accessible to regular employees, such as maternity and paternity leave programs, and leave due to sickness or to care for a sick family member. In addition to this, the current EI system allows certain exceptions for some non-standard workers. For example, some individuals who work independently as barbers, hairdressers, taxi drivers or drivers of other passenger vehicles are eligible to receive benefits through the regular EI program. Fishermen are also included as insured persons under the EI Fishing Regulations. In the case of self-employed fishermen, EI qualification is tied to income: in order to qualify for up to 26 weeks of benefits, they need to have earned between $2,500 and $4,200 in the last 31 weeks.
  2. The two main reasons for not contributing to the EI program were not having worked in the previous 12 months, and non-insurable employment (which includes self-employment).
submitted by AwesomeMathUse to econmonitor

Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
Here are some conventions used in this post of which you might take notice:
Sources are linked sometimes below, but here is a list of links that were helpful while writing this:
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto their own.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character '', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no sensical character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
float: [sign] integer '.' [integer] exponent
       [sign] integer '.' integer [exponent]
exponent: 'E' [sign] digit [digit]
integer: digit
         integer digit
digit: one of '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
sign: one of '+' '-'
This grammar generates things like 100.3 and 1.E5 but not things like .01 or 14E2 or 100.. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
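For concreteness, here is one way to express that grammar as a Common Lisp predicate. This is a sketch written for this post, not code from the manual, and the function name is invented; tokens are assumed to be uppercase, as everything was on the 7090.

(defun lisp-1.5-float-syntax-p (token)
  "True if the string TOKEN matches the LISP 1.5 floating-point grammar."
  (let ((pos 0)
        (len (length token)))
    (labels ((peek () (and (< pos len) (char token pos)))
             (digits ()
               ;; Consume one or more decimal digits; true if any were seen.
               (let ((start pos))
                 (loop while (and (peek) (digit-char-p (peek)))
                       do (incf pos))
                 (> pos start)))
             (exponent ()
               ;; 'E' [sign] digit [digit]
               (and (eql (peek) #\E)
                    (progn (incf pos)
                           (when (member (peek) '(#\+ #\-)) (incf pos))
                           (and (peek)
                                (digit-char-p (peek))
                                (progn (incf pos)
                                       (when (and (peek) (digit-char-p (peek)))
                                         (incf pos))
                                       t))))))
      (when (member (peek) '(#\+ #\-)) (incf pos))
      (and (digits)                     ; integer part
           (eql (peek) #\.)             ; the mandatory point
           (progn (incf pos) t)
           (if (digits)                 ; fraction present: exponent optional
               (or (= pos len) (exponent))
               (exponent))              ; no fraction: exponent required
           (= pos len)))))

With this sketch, "100.3" and "1.E5" are accepted, while ".01", "14E2", and "100." are rejected, matching the examples above.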
Octal numbers are defined by the following grammar:
octal: [sign] octal-digits 'Q' [integer]
octal-digits: octal-digit [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
octal-digit: one of '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor, which is a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left 3×s bits, where s is the scale factor. Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (which is the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (but still being easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ored with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
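As a sketch of what such a conversion involves, here is illustrative code written for this post (the function name is made up); it returns the contents of the 36-bit word as an unsigned integer, following the shift-and-sign-bit description above.

(defun parse-lisp-1.5-octal (token)
  "Convert a LISP 1.5 octal token such as \"777Q1\" or \"-7Q2\" into the
integer held by the corresponding 36-bit word."
  (let* ((negativep (char= (char token 0) #\-))
         (start (if (member (char token 0) '(#\+ #\-)) 1 0))
         (q-position (position #\Q token :start start))
         (digits (parse-integer token :start start :end q-position :radix 8))
         (scale (if (and q-position (< (1+ q-position) (length token)))
                    (parse-integer token :start (1+ q-position))
                    0))
         (magnitude (ash digits (* 3 scale))))
    ;; A leading minus sign ORs a one into the thirty-sixth (sign) bit.
    (if negativep
        (logior magnitude (ash 1 35))
        magnitude)))

For example, (parse-lisp-1.5-octal "777Q1") gives 4088 (that is, #o7770), and a minus sign simply sets the sign bit rather than negating the value, which is exactly why a mechanical translation to Common Lisp's #o notation doesn't work.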
Symbol syntax. Symbol names can be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, it must not begin with a digit or with either of the characters '+' or '-', and the first two characters cannot both be '$'. Otherwise, any of the alphanumeric characters, along with the special characters '+', '-', '=', '*', '/', and '$', may appear in a symbol's name. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
(defun dollar-sign-reader (stream character)
  (declare (ignore character))
  (let ((next (read-char stream t nil t)))
    (cond ((char= next #\$)
           (let ((terminator (read-char stream t nil t)))
             (values
              (intern
               (with-output-to-string (name)
                 (loop for c := (read-char stream t nil t)
                       until (char= c terminator)
                       do (write-char c name)))))))
          (t
           (unread-char next stream)
           (with-standard-io-syntax
             (read (make-concatenated-stream
                    (make-string-input-stream "$")
                    stream)
                   t nil t))))))

(set-macro-character #\$ #'dollar-sign-reader t)
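A quick, untested check of the intended behaviour, assuming the macro above has been installed in the current readtable:

(read-from-string "$$x()x")   ; => the symbol |()|
(read-from-string "$eof$")    ; => the symbol $EOF$, read as usual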

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And there were some aspects of LISP 1.5 that might be surprising to programmers familiar only with Common Lisp or a Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
            equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
            T→F]
There are several things to note. First is the entirely different phrase structure. It is an infix language looking much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters when to the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expression, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seemed like nobody felt a need for them.
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. The convention of octal numbers by default, decimal numbers being indicated by a trailing decimal point, of Maclisp too comes from the assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1 (COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2 (CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T (CONS (CAR A) A2))) (CDR A) C)))))))) 
Nowadays we might indent it like so:
(TH1 (LAMBDA (A1 A2 A C)
       (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
             (T (OR (MEMBER (CAR A) C)
                    (COND ((ATOM (CAR A))
                           (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                      (T (CONS (CAR A) A1)))
                                A2
                                (CDR A)
                                C))
                          (T (TH1 A1
                                  (COND ((MEMBER (CAR A) A2) A2)
                                        (T (CONS (CAR A) A2)))
                                  (CDR A)
                                  C))))))))
Part of the lack of formatting stems probably from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there is no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are limited at least to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
DEFINE((
(MEMBER (LAMBDA (A X) (COND ((NULL X) F)
                            ((EQ A (CAR X) ) T)
                            (T (MEMBER A (CDR X))) )))
))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which is related partially to the programming style that was common at the time and partially to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited in Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
(car x)   = (first x)
(cdr x)   = (rest x)
(caar x)  = (first (first x))
(cadr x)  = (second x)
(cdar x)  = (rest (first x))
(cddr x)  = (rest (rest x))
(caaar x) = (first (first (first x)))
(caadr x) = (first (second x))
(cadar x) = (second (first x))
(caddr x) = (third x)
(cdaar x) = (rest (first (first x)))
(cdadr x) = (rest (second x))
(cddar x) = (rest (rest (first x)))
(cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
(caaaar x) = (first (first (first (first x))))
(caaadr x) = (first (first (second x)))
(caadar x) = (first (second (first x)))
(caaddr x) = (first (third x))
(cadaar x) = (second (first (first x)))
(cadadr x) = (second (second x))
(caddar x) = (third (first x))
(cadddr x) = (fourth x)
(cdaaar x) = (rest (first (first (first x))))
(cdaadr x) = (rest (first (second x)))
(cdadar x) = (rest (second (first x)))
(cdaddr x) = (rest (third x))
(cddaar x) = (rest (rest (first (first x))))
(cddadr x) = (rest (rest (second x)))
(cdddar x) = (rest (rest (rest (first x))))
(cddddr x) = (rest (rest (rest (rest x))))
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or

(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
(defconstant *t* t)   ; (eq *t* t) is true
(defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
(LAMBDA (A E)
  (PROG ()
   LOOP (PRINT (EVAL (CAR A) E))
        (COND ((NULL (CDR A)) (RETURN NIL)))
        (SETQ A (CDR A))
        (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "Read-Evaluate-Print-Loop", for which a common abbreviation is "REPL". Its structure is exactly as you would expect from that name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is the same as top level file processing, except that the results of only the last form are printed, when it's done.
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
(defun evalquote (operator arguments) (eval (cons operator arguments))) 
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
DEFINE (( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
which evalquote would process as though it had been written
(DEFINE ( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
Evaluation, scope, extent. Before further discussion, here the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
(defun evalquote (function arguments)
  (if (atom function)
      (if (or (get function 'fexpr)
              (get function 'fsubr))
          (eval (cons function arguments) nil))
      (apply function arguments nil)))

(defun apply (function arguments environment)
  (cond ((null function) nil)
        ((atom function)
         (let ((expr (get function 'expr))
               (subr (get function 'subr)))
           (cond (expr (apply expr arguments environment))
                 (subr ; see below
                  )
                 (t (apply (cdr (sassoc function environment
                                        (lambda () (error "A2"))))
                           arguments
                           environment)))))
        ((eq (car function) 'label)
         (apply (caddr function)
                arguments
                (cons (cons (cadr function) (caddr function)) environment)))
        ((eq (car function) 'funarg)
         (apply (cadr function) arguments (caddr function)))
        ((eq (car function) 'lambda)
         (eval (caddr function)
               (nconc (pair (cadr function) arguments) environment)))
        (t (apply (eval function environment) arguments environment))))

(defun eval (form environment)
  (cond ((null form) nil)
        ((numberp form) form)
        ((atom form)
         (let ((apval (get form 'apval)))
           (if apval
               (car apval)
               (cdr (sassoc form environment (lambda () (error "A8")))))))
        ((eq (car form) 'quote) (cadr form))
        ((eq (car form) 'function) (list 'funarg (cadr form) environment))
        ((eq (car form) 'cond) (evcon (cdr form) environment))
        ((atom (car form))
         (let ((expr (get (car form) 'expr))
               (fexpr (get (car form) 'fexpr))
               (subr (get (car form) 'subr))
               (fsubr (get (car form) 'fsubr)))
           (cond (expr (apply expr (evlis (cdr form) environment) environment))
                 (fexpr (apply fexpr (list (cdr form) environment) environment))
                 (subr ; see below
                  )
                 (fsubr ; see below
                  )
                 (t (eval (cons (cdr (sassoc (car form) environment
                                             (lambda () (error "A9"))))
                                (cdr form))
                          environment)))))
        (t (apply (car form) (evlis (cdr form) environment) environment))))

(defun evcon (cond environment)
  (cond ((null cond) (error "A3"))
        ((eval (caar cond) environment) (eval (cadar cond) environment))
        (t (evcon (cdr cond) environment))))

(defun evlis (list environment)
  (maplist (lambda (j) (eval (car j) environment)) list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. In the manual it describes the actions of the interpreter as calling a function called spread, which takes the arguments given in a Lisp function call and puts them into the machine registers expected with LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of fsubr, instead of calling spread, since the function will always get two arguments, it places them directly in the registers.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here it takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the differences, the following code, if Common Lisp were evaluated like LISP 1.5, would be valid:
(defun weird (a b)
  (other-weird 5))

(defun other-weird (n)
  (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
(defun weird (a b)
  (declare (special a b))
  (other-weird 5))

(defun other-weird (n)
  (declare (special a b))
  (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
(defun testr (x p f u)
  (cond ((funcall p x) (funcall f x))
        ((atom x) (funcall u))
        (t (testr (cdr x) p f
                  (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is currently bound whenever the function (lambda expression) is applied, not when it is written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
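In Common Lisp the lambda inside testr already closes over the right x lexically, so no funarg device is needed; here is a small made-up check of the definition above:

;; Search a tree for the atom 5; P, F, and U are passed as closures.
(testr '((1 2) (3 4) 5)
       (lambda (x) (eql x 5))        ; predicate
       (lambda (x) (list 'found x))  ; applied on success
       (lambda () 'not-found))       ; called when the search bottoms out
;; => (FOUND 5)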
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 will eventually find the parameter f and apply its functional value, if there isn't a function named f—but it will check for a function definition first. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
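For comparison, the Common Lisp way to call the functional value of the parameter f is:

(lambda (f x) (funcall f x))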
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
    (setq m a)
    (setq a 7)
    (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
    (setq a 4)
    (setq b 6)
    (setq i 3)
    (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
    (setq m a)
    (setq a 7)
    (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
    (declare (special a b i))
    (setq a 4)
    (setq b 6)
    (setq i 3)
    (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it really doesn't help too much with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way would be to factor out the variables used among several functions (as long as you are sure that it is used in only those functions) and put them in a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it might well be a better choice to create a global variable with defvar or defparameter.
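As a minimal sketch of that let-based factoring (the example is made up and unrelated to the program above), two functions can share a lexical variable that nothing else can see:

(let ((counter 0))
  ;; Both functions close over COUNTER; no global variable is needed.
  (defun next-id () (incf counter))
  (defun reset-ids () (setq counter 0)))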
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.
submitted by kushcomabemybedtime to lisp

Differences between LISP 1.5 and Common Lisp, Part 2a

Here is the first part of the second part (I ran out of characters again...) of a series of posts documenting the many differences between LISP 1.5 and Common Lisp. The preceding post can be found here.
In this part we're going to look at LISP 1.5's library of functions.
Of the 146 symbols described in The LISP 1.5 Programmer's Manual, sixty-two have the same names as standard symbols in Common Lisp. These symbols are enumerated here.
The symbols t and nil have been discussed already. The remaining symbols are operators. We can divide them into groups based on how semantics (and syntax) differ between LISP 1.5 and Common Lisp:
  1. Operators that have the same name but have quite different meanings
  2. Operators that have been extended in Common Lisp (e.g. to accept a variable number of arguments), but that otherwise have similar enough meanings
  3. Operators that have remained effectively the same
The third group is the smallest. Some functions differ only in that they have a larger domain in Common Lisp than in LISP 1.5; for example, the length function works on sequences instead of lists only. Such functions are pointed out below. All the items in this list should, given the same input, behave identically in Common Lisp and LISP 1.5. They all also have the same arity.
These are somewhat exceptional items on this list. In LISP 1.5, car and cdr could be used on any object; for atoms, the result was undefined, but there was a result. In Common Lisp, applying car and cdr to anything that is not a cons is an error. Common Lisp does specify that taking the car or cdr of nil results in nil, which was not a feature of LISP 1.5 (it comes from Interlisp).
Common Lisp's equal technically compares more things than the LISP 1.5 function, but of course Common Lisp has many more kinds of things to compare. For lists, symbols, and numbers, Common Lisp's equal is effectively the same as LISP 1.5's equal.
In Common Lisp, expt can return a complex number. LISP 1.5 does not support complex numbers (as a first class type).
As mentioned above, Common Lisp extends length to work on sequences. LISP 1.5's length works only on lists.
It's kind of a technicality that this one makes the list. In terms of functionality, you probably won't have to modify uses of return---in the situations in which it was used in LISP 1.5, it worked the same as it would in Common Lisp. But Common Lisp's definition of return is really hiding a huge difference between the two languages discussed under prog below.
As with length, this function operates on sequences and not only lists.
In Common Lisp, this function is deprecated.
LISP 1.5 defined setq in terms of set, whereas Common Lisp makes setq the primitive operator.
Of the remaining thirty-three, seven are operators that behave differently from the operators of the same name in Common Lisp:
  • apply, eval
The connection between apply and eval has been discussed already. Besides setq and prog or special or common, function parameters were the only way to bind variables in LISP 1.5 (the idea of a value cell was introduced by Maclisp); the manual describes apply as "The part of the interpreter that binds variables" (p. 17).
  • compile
In Common Lisp the compile function takes one or two arguments and returns three values. In LISP 1.5 compile takes only a single argument, a list of function names to compile, and returns that argument. The LISP 1.5 compiler would automatically print a listing of the generated assembly code, in the format understood by the Lisp Assembly Program or LAP. Another difference is that compile in LISP 1.5 would immediately install the compiled definitions in memory (and store a pointer to the routine under the subr or fsubr indicators of the compiled functions).
  • count, uncount
These have nothing to do with Common Lisp's count. Instead of counting the number of items in a collection satisfying a certain property, count is an interface to the "cons counter". Here's what the manual says about it (p. 34):
The cons counter is a useful device for breaking out of program loops. It automatically causes a trap when a certain number of conses have been performed.
The counter is turned on by executing count[n], where n is an integer. If n conses are performed before the counter is turned off, a trap will occur and an error diagnostic will be given. The counter is turned off by uncount[NIL]. The counter is turned on and reset each time count[n] is executed. The counter can be turned on so as to continue counting from the state it was in when last turned off by executing count[NIL].
This counting mechanism has no real counterpart in Common Lisp.
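There is no portable way to hook cons itself, but a rough approximation of the idea, counting only the conses that go through a wrapper, might look like this (all names invented for this post):

(defvar *cons-limit* nil
  "When an integer, signal an error after that many counted conses.")
(defvar *cons-count* 0)

(defun counted-cons (a b)
  ;; CONS, but bump the counter and trap when the limit is reached.
  (incf *cons-count*)
  (when (and *cons-limit* (>= *cons-count* *cons-limit*))
    (error "Cons counter trap: ~D conses performed." *cons-count*))
  (cons a b))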
  • error
In Common Lisp, error is part of the condition system, and accepts a variable number of arguments. In LISP 1.5, it has a single, optional argument, and of course LISP 1.5 had no condition system. It had errorset, which we'll discuss later. In LISP 1.5, executing error would cause an error diagnostic and print its argument if given. While this is fairly similar to Common Lisp's error, I'm putting it in this section since the error handling capabilities of LISP 1.5 are very limited compared to those of Common Lisp (consider that this was one of the only ways to signal an error). Uses of error in LISP 1.5 won't necessarily run in Common Lisp, since LISP 1.5's error accepted any object as an argument, while Common Lisp's error needs designators for a simple-error condition. An easy conversion is to change (error x) into (error "~A" x).
  • map
This function is quite different from Common Lisp's map. The incompatibility is mentioned in Common Lisp: The Language:
In MacLisp, Lisp Machine Lisp, Interlisp, and indeed even Lisp 1.5, the function map has always meant a non-value-returning version. However, standard computer science literature, including in particular the recent wave of papers on "functional programming," have come to use map to mean what in the past Lisp implementations have called mapcar. To simplify things henceforth, Common Lisp follows current usage, and what was formerly called map is named mapl in Common Lisp.
But even mapl isn't the same as map in LISP 1.5, since mapl returns the list it was given and LISP 1.5's map returns nil. Actually there is another, even larger incompatibility that isn't mentioned: The order of the arguments is different. The first argument of LISP 1.5's map was the list to be mapped and the second argument was the function to map over it. (The order was changed in Maclisp, likely because of the extension of the mapping functions to multiple lists.) You can't just change all uses of map to mapl because of this difference. You could define a function like map-1.5, such as
(defun map-1.5 (list function) (mapl function list) nil) 
and replace map with map-1.5 (or just shadow the name map).
  • function
This operator has been discussed earlier in this post.
Common Lisp doesn't need anything like LISP 1.5's function. However, mostly by coincidence, it will tolerate it in many cases; in particular, it works with lambda expressions and with references to global function definitions.
  • search
This function isn't really anything like Common Lisp's search. Here is how it is defined in the manual (p. 63, converted from m-expressions into Common Lisp syntax):
(defun search (x p f u)
  (cond ((null x) (funcall u x))
        ((funcall p x) (funcall f x))
        (t (search (cdr x) p f u))))
Somewhat confusingly, the manual says that it searches "for an element that has the property p"; one might expect the second branch to test (get x p).
The function is kind of reminiscent of the testr function, used to exemplify LISP 1.5's indefinite scoping in the previous part.
  • special, unspecial
LISP 1.5's special variables are pretty similar to Common Lisp's special variables—but only because all of LISP 1.5's variables are pretty similar to Common Lisp's special variables. The difference between regular LISP 1.5 variables and special variables is that symbols declared special (using this special special special operator) have a value on their property list under the indicator special, which is used by the compiler when no binding exists in the current environment. The interpreter knew nothing of special variables; thus they could be used only in compiled functions. Well, they could be used in any function, but the interpreter wouldn't find the special value. (It appears that this is where the tradition of Lisp dialects having different semantics when compiled versus when interpreted began; eventually Common Lisp would put an end to the confusion.)
You can generally change special into defvar and get away fine. However there isn't a counterpart to unspecial. See also common.
Now come the operators that are essentially the same in LISP 1.5 and in Common Lisp, but have some minor differences.
  • append
The LISP 1.5 function takes only two arguments, while Common Lisp allows any number.
  • cond
In Common Lisp, when no test in a cond form is true, the result of the whole form is nil. In LISP 1.5, an error was signaled, unless the cond was contained within a prog, in which case it would quietly do nothing. Note that the cond must be at the "top level" inside the prog; cond forms at any deeper level will error if no condition holds.
  • gensym
The LISP 1.5 gensym function takes no arguments, while the Common Lisp function does.
  • get
Common Lisp's get takes three arguments, the last of which is a value to return if the symbol does not have the indicator on its property list; in LISP 1.5 get has no such third argument.
  • go
In LISP 1.5 go was allowed in only two contexts: (1) at the top level of a prog; (2) within a cond form at the top level of a prog. Later dialects would loosen this restriction, leading to much more complicated control structures. While progs in LISP 1.5 were somewhat limited, it is at least fairly easy to tell what's going on (e.g. loop conditions). Note that return does not appear to be limited in this way.
  • intern
In Common Lisp, intern can take a second argument specifying in what package the symbol is to be interned, but LISP 1.5 does not have packages. Additionally, the required argument to intern is a string in Common Lisp; LISP 1.5 doesn't really have strings, and so intern instead wants a pointer to a list of full words (of packed BCD characters; the print names of symbols were stored in this way).
  • list
In Common Lisp, list can take any number of arguments, including zero, but in LISP 1.5 it seems that it must be given at least one argument.
  • load
In LISP 1.5, load can't be given a filespec as an argument, for many reasons. Actually, it can't be given anything as an argument; its purpose is simply to hand control over to the loader. The loader "expects octal correction cards, 704 row binary cards, and a transfer card." If you have the source code that would be compiled into the material to be loaded, then you can just put it in another file and use Common Lisp's load to load it in. But if you don't have the source code, then you're out of luck.
  • mapcon, maplist
The differences between Common Lisp and LISP 1.5 regarding these functions are similar to those for map given above. Both of these functions returned nil in LISP 1.5, and they took the list to be mapped as their first argument and the function to map as their second argument. A major incompatibility to note is that maplist in LISP 1.5 did what mapcar in Common Lisp does; Common Lisp's maplist is different.
  • member
In LISP 1.5, member takes none of the fancy keyword arguments that Common Lisp's member does, and returns only a truth value, not the tail of the list.
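A compatibility sketch in the spirit of map-1.5 above (the name is invented; the LISP 1.5 manual defines member in terms of equal, if memory serves, hence the :test argument here):

(defun member-1.5 (item list)
  ;; Truth value only, as in LISP 1.5; the :test is an assumption.
  (if (member item list :test #'equal) t nil))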
  • nconc
In LISP 1.5, this function took only two arguments; in Common Lisp, it takes any number.
  • prin1, print, terpri
In Common Lisp, these functions take an optional argument specifying an output stream to which they will send their output, but in LISP 1.5 prin1 and print take just one argument, and terpri takes no arguments.
  • prog
In LISP 1.5, the list of program variables was just that: a list of variables. No initial values could be provided as they can in Common Lisp; all the program variables started out bound to nil. Note that the program variables are just like any other variables in LISP 1.5 and have indefinite scope.
In the late '70s and early '80s, the maintainers of Maclisp and Lisp Machine Lisp wanted to add "naming" abilities to prog. You could say something like
(prog outer () ... (prog () (return ... outer))) 
and the return would jump not just out of the inner prog, but also out of the outer one. However, they ran into a problem with integrating a named prog with parts of the language that were based on prog. For example, they could add a special case to dotimes to handle an atomic first argument, since regular dotimes forms had a list as their first argument. But Maclisp's do had two forms: the older (introduced in 1969) form
(do atom initial step-form end-test body...) 
and the newer form, which was identical to Common Lisp's do. The older form was equivalent to
(do ((atom initial step-form)) (end-test) body...) 
Since the older form was still supported, they couldn't add a special case for an atomic first argument because that was the normal case of the older kind of do. They ended up not adding named prog, owing to these kinds of difficulties.
However, during the discussion of how to make named prog work, Kent Pitman sent a message that contained the following text:
I now present my feelings on this issue of how DO/PROG could be done in order this haggling, part of which I think comes out of the fact that these return tags are tied up in PROG-ness and so on ... Suppose you had the following primitives in Lisp:

(PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags. Returns () if you fall off the end. RETURN does not work from this form.

(PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and if a RETURN-FROM specifying name (or just a RETURN) were executed, control would pass to here. Returns the value of form if form returns normally or the value returned from it if a RETURN or RETURN-FROM is executed. [Note: this is not a [*]CATCH because it is lexical in nature and optimized out by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT and UNNAMED-PROG-RETURN-POINT might be desirable – extrapolate for yourself how this would change things – I'll just present the basic idea here.]

(ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow return or goto. All forms are evaluated. GO does not work to get to any form in the iteration body.

So then we could just say that the definitions for PROG and DO might be (ignore for now old-DO's – they could, of course, be worked in if people really wanted them but they have nothing to do with this argument) ...

(PROG [  ]  . ) => (PROG-RETURN-POINT (LET  (PROG-BODY . )) [  ])

(DO [  ]   . ) => (PROG-RETURN-POINT (ITERATE   (PROG-BODY . )) [  ])

Other interesting combinations could be formed by those interested in them. If these lower-level primitives were made available to the user, he needn't feel tied to one of PROG/DO – he can assemble an operator with the functionality he really wants.
Two years later, Pitman would join the team developing the Common Lisp language. For a little while, incorporating named prog was discussed, which eventually led to the splitting of prog in quite a similar way to Pitman's proposal. Now prog is a macro, simply combining the three primitive operators let, block, and tagbody. The concept of the tagbody primitive in its current form appears to have been introduced in this message, which is a writeup by David Moon of an idea due to Alan Bawden. In the message he says
The name could be GO-BODY, meaning a body with GOs and tags in it, or PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning something inside of which GO may be used. I don't care; suggestions anyone?
Guy Steele, in his proposed evaluator for Common Lisp, called the primitive tagbody, which stuck. It is a little bit more logical than go-body, since go is just an operator and allowed anywhere in Common Lisp; the only special thing about tagbody is that atoms in its body are treated as tags.
  • prog2
In LISP 1.5, prog2 was really just a function that took two arguments and returned the result of the evaluation of the second one. The purpose of it was to avoid having to write (prog () ...) everywhere when all you want to do is call two functions. In later dialects, progn would be introduced and the "implicit progn" feature would remove the need for prog2 used in this way. But prog2 stuck around and was generalized to a special operator that evaluated any number of forms, while holding on to the result of the second one. Programmers developed the (prog2 nil ...) idiom to save the result of the first of several forms; later prog1 was introduced, making the idiom obsolete. Nowadays, prog1 and prog2 are used typically for rather special purposes.
Regardless, in LISP 1.5 prog2 was a machine-coded subroutine that was equivalent to the following function definition in Common Lisp:
(defun prog2 (one two) two) 
  • read
The read function in LISP 1.5 did not take any arguments; Common Lisp's read takes four. In LISP 1.5, read read either from "SYSPIT" or from the punched card reader. It seems that SYSPIT stood for "SYStem Paper (maybe Punched) Input Tape", and that it designated a punched tape reader; alternatively, it might designate a magnetic tape reader, but the manual makes reference to punched cards. But more on input and output later.
  • remprop
The only difference between LISP 1.5's remprop and Common Lisp's remprop is that the value of LISP 1.5's remprop is always nil.
  • setq
In Common Lisp, setq takes an arbitrary even number of arguments, representing pairs of symbols and values to assign to the variables named by the symbols. In LISP 1.5, setq takes only two arguments.
  • sublis
LISP 1.5's sublis and subst do not take the keyword arguments that Common Lisp's sublis and subst take.
  • trace, untrace
In Common Lisp, trace and untrace are operators that take any number of arguments and trace the functions named by them. In LISP 1.5, both trace and untrace take a single argument, which is a list of the functions to trace.
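If old code calls trace and untrace as functions with a list argument, a small shim is enough (the names below are invented; eval is used because Common Lisp's trace and untrace are macros that do not evaluate their arguments):

(defun trace-1.5 (function-names)
  (eval `(trace ,@function-names)))

(defun untrace-1.5 (function-names)
  (eval `(untrace ,@function-names)))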

Functions not in Common Lisp

We turn now to the symbols described in the LISP 1.5 Programmer's Manual that don't appear in Common Lisp. Let's get the easiest case out of the way first: Here are all the operators in LISP 1.5 that have a corresponding operator in Common Lisp, with notes about differences in functionality where appropriate.
  • add1, sub1
These functions are the same as Common Lisp's 1+ and 1- in every way, down to the type genericism.
  • conc
This is just Common Lisp's append, or LISP 1.5's append extended to more than two arguments.
  • copy
Common Lisp's copy-list function does the same thing.
  • difference
This corresponds to -, although difference takes only two arguments.
  • divide
This function takes two arguments and is basically a consing version of Common Lisp's floor:
(divide x y) = (multiple-value-list (floor x y)) 
  • digit
This function takes a single argument, and is like Common Lisp's digit-char-p except that the radix isn't variable, and it returns a true or false value only (and not the weight of the digit).
  • efface
This function deletes the first appearance of an item from a list. A call like (efface item list) is equivalent to the Common Lisp code (delete item list :count 1).
  • greaterp, lessp
These correspond to Common Lisp's > and <, although greaterp and lessp take only two arguments.
As a historical note, the names greaterp and lessp survived in Maclisp and Lisp Machine Lisp. Both of those languages also had > and <, which were used for the two-argument case; Common Lisp favored genericism and went with > and < only. However, a vestige of the old predicates still remains, in the lexicographic ordering functions: char-lessp, char-greaterp, string-lessp, string-greaterp.
  • minus
This function takes a single argument and returns its negation; it is equivalent to the one-argument case of Common Lisp's -.
  • leftshift
This function is the same as ash in Common Lisp; it takes two arguments, m and n, and returns m×2^n. Thus if the second argument is negative, the shift is to the right instead of to the left.
  • liter
This function is identical in essence to Common Lisp's alpha-char-p, though more precisely it's closer to upper-case-p; LISP 1.5 was used on computers that made no provision for lowercase characters.
  • pair
This is equivalent to the normal, two-argument case of Common Lisp's pairlis.
  • plus
This function takes any number of arguments and returns their sum; its Common Lisp counterpart is +.
  • quotient
This function is equivalent to Common Lisp's /, except that quotient takes only two arguments.
  • recip
This function is equivalent to the one-argument case of Common Lisp's /.
  • remainder
This function is equivalent to Common Lisp's rem.
  • times
This function takes any number of arguments and returns their product; its Common Lisp counterpart is *.
Part 2b will be posted in a few hours probably.
submitted by kushcomabemybedtime to lisp

Wall Street Week Ahead for the trading week beginning December 9th, 2019

Good Saturday morning to all of you here on wallstreetbets. I hope everyone on this sub made out pretty nicely in the market this past week, and is ready for the new trading week ahead.
Here is everything you need to know to get you ready for the trading week beginning December 9th, 2019.

What Trump does before trade deadline is the ‘wild card’ that will drive markets in the week ahead - (Source)

The Trump administration’s Dec. 15 deadline for new tariffs on China looms large, and while most strategists expect them to be delayed while talks continue, they don’t rule out the unexpected.
“That’s the biggest thing in the room next week. I don’t think he’s going to raise them. I think they’ll find a reason,” said James Paulsen, chief investment strategist at Leuthold Group. But Paulsen said President Donald Trump’s unpredictable nature makes it really impossible to tell what will happen as the deadline nears.
“He’s the one off you’re never sure about. It’s not just tariffs. It could be damn near anything,” Paulsen said. “I think he goes out of his way to be a wild card.”
Just in the past week, Trump said he would put new tariffs on Brazil, Argentina and France. He rattled markets when he said he could wait until after the election for a trade deal with China.
Once dubbing himself “tariff man,” Trump reminded markets that he sees tariffs as a way of getting what he wants from an opponent, and traders were reminded tariffs may be around for a long time.
Trade certainly could be the most important event for markets in the week ahead, which also includes a Fed interest rate decision Wednesday and the U.K.’s election that could set the course for Brexit. If there’s no China deal, that could beat up stocks, send Treasury yields lower and send investors into other safe havens.
When Fed officials meet this week, they are not expected to change interest rates, but they are likely to discuss whether they believe their repo operations to drive liquidity in the short-term funding market are running smoothly, ahead of year end. Economic reports in the coming week include CPI inflation Wednesday, which could be an important input for the Fed.
Punt, but no deal

As of Friday, the White House did not appear any closer to striking a deal with China, though officials say talks are going fine. Back in August, Trump said if there is no deal, Dec. 15 is the date for a new wave of tariffs on $156 billion in Chinese goods, including cell phones, toys and laptop computers.
Dan Clifton, head of policy research at Strategas, said it seems like a low probability there will be a deal in the coming week. “What the market is focused on right now is whether there’s going to be tariffs that go into effect on Dec. 15, or not. It’s being rated pretty binary,” said Clifton. “I think what’s happening here and the actions by China overnight looks like we’re setting up for a kick.”
China removed some tariffs from U.S. agricultural products Friday, and administration officials have been talking about discussions going fine.
Clifton said if tariffs are put on hold, it’s unclear for how long. “Those are going to be larger questions that have to be answered. This is really now about politics. Is it a better idea for the president to cut a deal without major structural reforms, or should he walk away? That’s the larger debate that has to happen after Dec. 15,” Clifton said. “I’m getting worried that some in the administration... they’re leaning toward no deal category.”
Clifton said Trump’s approval rating falls when the trade wars heat up, so that may motivate him to complete the deal with China even if he doesn’t get everything he wants.
Michael Schumacher, director of rates strategy at Wells Fargo, said his base case is for a trade deal to be signed in the next couple of months, but even so, he said he can’t entirely rule out another outcome. It would make sense for tariffs to be put on hold while talks continue.
“The tweeter-in-chief controls that one, ” said Schumacher. “That’s anybody’s guess...I wouldn’t be at all surprised if he suspends it for a few weeks. If he doesn’t, that’s a pretty unpleasant result. That’s risk off. That’s pretty clear.”
Because the next group of tariffs would be on consumer goods, economists fear they could hit the economy through the consumer, the strongest and largest engine behind economic growth.
Fed ahead

The Fed has moved to the sidelines and says it is monitoring economic data before deciding its next move. Friday’s strong November jobs report, with 266,000 jobs added, reinforces the Fed’s decision to move to neutral for now.
So the most important headlines from its meeting this week could be about the repo market, basically the plumbing for the financial system where financial institutions fund themselves. Interest rates in that somewhat obscure market spiked in September. Market pros said the issue was a cash crunch in the short term lending market, made better when the Fed started repo operations.
The Fed now has multiple operations running over year end, and Schumacher said it has latitude to do more. Strategists expect there to be more pressure on the repo market as banks rein in operations to spruce up their balance sheets at year end.
“No one is going to come to the Fed and say you did too much in the year-end funding,” said Schumacher. “If repo happens to spike somewhat on one day, the Fed is going to hammer it the next day.”
Paulsen said the markets will be attuned to this week’s inflation numbers. Consumer inflation, the CPI, is reported Wednesday, and producer prices follow Thursday.
A pickup in inflation of any significance is one thing that could pull the Fed from the sidelines, and prod it to consider a rate hike.
“I think the inflation reports might start to get a little attention. Given the jobs numbers, the employment rate, growth picking up a little bit and a better tone in manufacturing. I do think if you get some hot CPI number, I don’t know if the Fed can ignore it,” he said. “Core CPI is 2.3%.” He said it would get noticed if it jumped to 2.5% or better.
The Fed’s inflation target is 2% but its preferred measure is the PCE inflation, and that remains under 2%.
Stocks were sharply higher Friday but ended the past week flattish. The S&P 500 was slightly higher, up 0.2% at 3,145, and the Dow was down 0.1% at 28,015. The Nasdaq was 0.1% lower, ending the week at 8,656.

This past week saw the following moves in the S&P:

(CLICK HERE FOR THE FULL S&P TREE MAP FOR THE PAST WEEK!)

Major Indices for this past week:

(CLICK HERE FOR THE MAJOR INDICES FOR THE PAST WEEK!)

Major Futures Markets as of Friday's close:

(CLICK HERE FOR THE MAJOR FUTURES INDICES AS OF FRIDAY!)

Economic Calendar for the Week Ahead:

(CLICK HERE FOR THE FULL ECONOMIC CALENDAR FOR THE WEEK AHEAD!)

Sector Performance WTD, MTD, YTD:

(CLICK HERE FOR FRIDAY'S PERFORMANCE!)
(CLICK HERE FOR THE WEEK-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE MONTH-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE 3-MONTH PERFORMANCE!)
(CLICK HERE FOR THE YEAR-TO-DATE PERFORMANCE!)
(CLICK HERE FOR THE 52-WEEK PERFORMANCE!)

Percentage Changes for the Major Indices, WTD, MTD, QTD, YTD as of Friday's close:

(CLICK HERE FOR THE CHART!)

S&P Sectors for the Past Week:

(CLICK HERE FOR THE CHART!)

Major Indices Pullback/Correction Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Major Indices Rally Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Most Anticipated Earnings Releases for this week:

(CLICK HERE FOR THE CHART!)

Here are the upcoming IPO's for this week:

(CLICK HERE FOR THE CHART!)

Friday's Stock Analyst Upgrades & Downgrades:

(CLICK HERE FOR THE CHART LINK #1!)
(CLICK HERE FOR THE CHART LINK #2!)

Reasons We Still Believe In December

It has been a rough start to the most wonderful month of them all, with the S&P 500 Index down each of the first two days of December. Don’t stop believing just yet, though.
Everyone knows December has usually been a good month for stocks, but what happened last year is still fresh in the minds of many investors. The S&P 500 fell 9.1% in December 2018 for the worst December since 1931. That sounds really bad, until you realize stocks fell 30% in September 1931, but we digress.
One major difference between now and last year is how well the global equities have been performing. Heading into December 2018, the S&P 500 was up 3.2% year to date, but markets outside of the United States were already firmly in the red, with many down double digits.
“We don’t think stocks are on the verge of another massive December sell off,” said LPL Financial Senior Market Strategist Ryan Detrick. “If my Cincinnati Bengals can win a game, anything is possible. However, we are quite encouraged by the overall participation we are seeing from various global stock markets this year versus last year, when the United States was about the only market in the green heading into December.”
Stocks have also overcome volatile starts to December recently. The S&P 500 was down four days in a row to start December in both 2013 and 2017, but the gauge still managed to gain 2.4% and 1%, respectively, in those months.
As the LPL Chart of the Day shows, December has been the second-best month of the year for stocks going back to 1950. It is worth noting that it was the best month of the year before last year’s massive drop. Stocks have historically been strong in pre-election years as well, and December has never been down in back-to-back pre-election years. Given that stocks fell in December 2015, the previous pre-election year, bulls could be smiling when this month is wrapped up.
(CLICK HERE FOR THE CHART!)

Could Impeachment Be Good for Investors?

Impeaching a President with the possibility of removal from office is by no means great for the country. However, it may not be so horrible for the stock market or investors if history is any guide. We first touched on this over two years ago here on the blog, and now that much has transpired and the US House of Representatives is proceeding with drafting articles of impeachment, we figured it was a good time to revisit the (albeit limited) history of market behavior during presidential impeachment proceedings. The three charts below really tell the story.
During the Watergate scandal of Nixon’s second term, the market suffered a major bear market from January 1973 to late 1974, with the Dow down 45.1%, the S&P 500 down 48.2% and NASDAQ down 59.9%. Sure, there were other factors that contributed to the bear market, such as the Oil Embargo, the Arab-Israeli War, the collapse of the Bretton Woods system, high inflation and Watergate itself. However, shortly after Nixon resigned on August 9, 1974, the market reached its secular bear market low: October 3 for the S&P 500 and NASDAQ and December 6 for the Dow.
Leading up to the Clinton investigations and through his subsequent impeachment and acquittal by the Senate, the market was on a tear as one of the biggest bull markets in history raged on. After the 1994 midterm elections, when the Republicans took back control of both houses of Congress, the market remained on a 45-degree upward trajectory except for a few blips and the shortest bear market on record, which lasted 45 days and bottomed on August 31, 1998.
Clinton was impeached in December 1998 and acquitted in February 1999 as the market continued higher throughout his second term. Sure, there were other factors that contributed to the late-1990s bull run, such as the Dotcom Boom, the Information Revolution, millennial fervor and a booming global economy, but Clinton’s personal scandal had little negative impact on markets.
It remains to be seen, of course, what will happen with President Trump’s impeachment proceedings and how the world and markets react, but the market continues to march on. If the limited history of impeachment proceedings against a US President in modern times (no offense to our 17th President, Andrew Johnson) is any guide, the market has bounced back after the last two impeachment proceedings and was higher a year later. Perhaps it will be better to buy any impeachment dip rather than sell it.
(CLICK HERE FOR THE CHART LINK #1!)
(CLICK HERE FOR THE CHART LINK #2!)
(CLICK HERE FOR THE CHART LINK #3!)

Typical December Trading: Modest Strength Early, Choppy Middle and Solid Gains Late

Historically, the first trading day of December, today, has a slightly bearish bias: the S&P 500 has advanced only 34 times over the last 69 years (since 1950), with an average loss of 0.02%. Tomorrow, the second trading day of December, however, has been stronger, up 52.2% of the time since 1950 with an average gain of 0.08%, and the third day is better still, up 59.4% of the time.
Over the more recent 21-year period, December has opened with strength and gains over its first seven trading days before beginning to drift. By mid-month all five indices have surrendered any early-month gains, but shortly thereafter Santa usually visits, sending the market higher until the last day of the month and the year, when last-minute selling, most likely for tax reasons, briefly interrupts the market’s rally.
(CLICK HERE FOR THE CHART!)
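For anyone who wants to sanity-check day-of-December statistics like the ones above against their own data, here is a minimal sketch of the calculation. It assumes nothing more than a pandas Series of daily closes with a DatetimeIndex; the file name and column are placeholders, and the exact figures will depend entirely on the data you feed it.

```python
# Minimal sketch: win rate and average change for the Nth trading day of December,
# computed from a daily close series. The CSV layout below is an assumption.
import pandas as pd

def december_day_stats(closes: pd.Series, day_rank: int):
    """Return (win rate %, average % change) for the Nth trading day of December."""
    returns = closes.pct_change()
    dec_returns = returns[returns.index.month == 12]
    # Number the trading days within each December (1, 2, 3, ...)
    day_number = dec_returns.groupby(dec_returns.index.year).cumcount() + 1
    nth = dec_returns[day_number == day_rank].dropna()
    return (nth > 0).mean() * 100.0, nth.mean() * 100.0

# Usage (assumed file with a date index and a 'close' column):
# closes = pd.read_csv("sp500_daily.csv", index_col=0, parse_dates=True)["close"]
# print(december_day_stats(closes, day_rank=1))   # first trading day of December
# print(december_day_stats(closes, day_rank=2))   # second trading day
```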

Odds Still Favor A Gain for Rest of December Despite Rough Start

Just when it was beginning to look like trade was heading in a positive direction, the wind shifted again. Yesterday it was steel and aluminum tariffs on Brazil and Argentina, and today a deal with China may not happen as soon as previously anticipated. The result was the worst first two trading days of December since last year and the sixth-worst start since 1950 for the S&P 500. DJIA and NASDAQ had their eighth-worst starts since 1950 and 1971, respectively.
However, historically, past weakness in early December (a combined loss over the first two trading days) was still followed by gains for the remainder of the month the majority of the time. DJIA has advanced 74.19% of the time following losses over the first two trading days, with an average gain for the remainder of December of 1.39%. S&P 500 was up 67.65% of the time with an average rest-of-month gain of 0.84%. NASDAQ is modestly softer, advancing 61.11% of the time during the remainder of December with an average advance of 0.30%.
(CLICK HERE FOR THE CHART LINK #1!)
(CLICK HERE FOR THE CHART LINK #2!)
(CLICK HERE FOR THE CHART LINK #3!)
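The conditional statistic above (how the rest of December fares after a combined loss over its first two trading days) can be reproduced from the same kind of daily close series as the earlier sketch. This is a sketch under that assumption, not the Almanac's own code, and the exact percentages will depend on the index data used.

```python
# Sketch: rest-of-December performance after a combined loss over the first two
# trading days, for each year in a daily close series (assumed DatetimeIndex).
import pandas as pd

def rest_of_month_after_weak_start(closes: pd.Series) -> pd.DataFrame:
    rows = []
    for year in sorted(set(closes.index.year)):
        dec = closes[(closes.index.year == year) & (closes.index.month == 12)]
        if len(dec) < 3:
            continue
        before = closes[closes.index < dec.index[0]]
        if before.empty:
            continue
        base = before.iloc[-1]                   # last close before December
        first_two = dec.iloc[1] / base - 1.0     # combined first-two-day change
        rest = dec.iloc[-1] / dec.iloc[1] - 1.0  # day-2 close to month-end close
        rows.append({"year": year, "first_two": first_two, "rest": rest})
    return pd.DataFrame(rows)

# weak = rest_of_month_after_weak_start(closes).query("first_two < 0")
# print((weak["rest"] > 0).mean() * 100)   # % of years higher for the rest of December
# print(weak["rest"].mean() * 100)         # average rest-of-month change
```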

STOCK MARKET VIDEO: Stock Market Analysis Video for Week Ending December 6th, 2019

(CLICK HERE FOR THE YOUTUBE VIDEO!)

STOCK MARKET VIDEO: ShadowTrader Video Weekly 12.8.19

([CLICK HERE FOR THE YOUTUBE VIDEO!]())
(VIDEO NOT YET POSTED!)
Here are the most notable companies (tickers) reporting earnings in the upcoming trading week:
  • $LULU
  • $COST
  • $THO
  • $AZO
  • $ADBE
  • $AVGO
  • $CIEN
  • $MDB
  • $CHWY
  • $SFIX
  • $AEO
  • $GME
  • $OLLI
  • $TOL
  • $PLCE
  • $UNFI
  • $PLAY
  • $ORCL
  • $HDS
  • $CONN
  • $MTN
  • $JT
  • $LOVE
  • $CMD
  • $PLAB
  • $DBI
  • $ROAD
  • $VRA
  • $CDMO
  • $LQDT
  • $TLRD
  • $TWST
  • $PHR
  • $NDSN
  • $MESA
  • $VERU
  • $DLHC
  • $BLBD
  • $OXM
  • $NX
  • $GNSS
  • $PHX
  • $GTIM
(CLICK HERE FOR NEXT WEEK'S MOST NOTABLE EARNINGS RELEASES!)
(CLICK HERE FOR NEXT WEEK'S HIGHEST VOLATILITY EARNINGS RELEASES!)
(CLICK HERE FOR MOST ANTICIPATED EARNINGS RELEASES FOR THE NEXT 5 WEEKS!)
Below are some of the notable companies reporting earnings in the upcoming trading week, with the date/time of release and consensus estimates, courtesy of Earnings Whispers:

Monday 12.9.19 Before Market Open:

(CLICK HERE FOR MONDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Monday 12.9.19 After Market Close:

(CLICK HERE FOR MONDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Tuesday 12.10.19 Before Market Open:

(CLICK HERE FOR TUESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Tuesday 12.10.19 After Market Close:

(CLICK HERE FOR TUESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Wednesday 12.11.19 Before Market Open:

(CLICK HERE FOR WEDNESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Wednesday 12.11.19 After Market Close:

(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Thursday 12.12.19 Before Market Open:

(CLICK HERE FOR THURSDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Thursday 12.12.19 After Market Close:

(CLICK HERE FOR THURSDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Friday 12.13.19 Before Market Open:

([CLICK HERE FOR FRIDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!]())
NONE.

Friday 12.13.19 After Market Close:

([CLICK HERE FOR FRIDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!]())
NONE.

lululemon athletica inc. $229.38

lululemon athletica inc. (LULU) is confirmed to report earnings at approximately 4:05 PM ET on Wednesday, December 11, 2019. The consensus earnings estimate is $0.93 per share on revenue of $896.50 million and the Earnings Whisper ® number is $0.98 per share. Investor sentiment going into the company's earnings release has 73% expecting an earnings beat. The company's guidance was for earnings of $0.90 to $0.92 per share on revenue of $880.00 million to $890.00 million. Consensus estimates are for year-over-year earnings growth of 24.00% with revenue increasing by 19.91%. Short interest has increased by 9.8% since the company's last earnings release while the stock has drifted higher by 16.0% from its open following the earnings release to be 26.0% above its 200 day moving average of $182.08. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, December 6, 2019, there was some notable buying of 927 contracts of the $260.00 call expiring on Friday, December 13, 2019. Option traders are pricing in an 8.3% move on earnings and the stock has averaged an 11.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)
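A note on the "option traders are pricing in an X% move" figures quoted throughout these previews: a common back-of-the-envelope estimate is the cost of the at-the-money straddle expiring just after the report divided by the share price. The sketch below only illustrates that arithmetic with made-up premiums; it is a generic approximation, not necessarily how Earnings Whispers computes its numbers.

```python
# Generic approximation of the implied earnings move: ATM straddle cost / spot price.
# The premiums below are hypothetical, chosen only to illustrate the arithmetic.

def implied_earnings_move(call_premium: float, put_premium: float, spot: float) -> float:
    """Implied move as a percentage of the share price."""
    return (call_premium + put_premium) / spot * 100.0

# A hypothetical $229 stock with a $10.20 ATM call and an $8.80 ATM put:
print(round(implied_earnings_move(10.20, 8.80, 229.0), 1))  # -> 8.3 (% move)
```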

Costco Wholesale Corp. $294.95

Costco Wholesale Corp. (COST) is confirmed to report earnings at approximately 4:15 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $1.70 per share on revenue of $37.43 billion and the Earnings Whisper ® number is $1.74 per share. Investor sentiment going into the company's earnings release has 78% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 5.59% with revenue increasing by 6.73%. Short interest has increased by 19.3% since the company's last earnings release while the stock has drifted higher by 2.5% from its open following the earnings release to be 10.3% above its 200 day moving average of $267.50. Overall earnings estimates have been revised higher since the company's last earnings release. On Tuesday, November 19, 2019 there was some notable buying of 916 contracts of the $265.00 put expiring on Friday, December 27, 2019. Option traders are pricing in a 3.7% move on earnings and the stock has averaged a 3.6% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Thor Industries, Inc. $67.77

Thor Industries, Inc. (THO) is confirmed to report earnings at approximately 6:45 AM ET on Monday, December 9, 2019. The consensus earnings estimate is $1.23 per share on revenue of $2.30 billion and the Earnings Whisper ® number is $1.30 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 16.89% with revenue increasing by 30.98%. Short interest has increased by 48.1% since the company's last earnings release while the stock has drifted higher by 25.5% from its open following the earnings release to be 16.0% above its 200 day moving average of $58.44. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, December 3, 2019 there was some notable buying of 838 contracts of the $60.00 put expiring on Friday, December 20, 2019. Option traders are pricing in a 10.0% move on earnings and the stock has averaged a 7.6% move in recent quarters.

(CLICK HERE FOR THE CHART!)

AutoZone, Inc. -

AutoZone, Inc. (AZO) is confirmed to report earnings at approximately 6:55 AM ET on Tuesday, December 10, 2019. The consensus earnings estimate is $13.69 per share on revenue of $2.76 billion and the Earnings Whisper ® number is $14.02 per share. Investor sentiment going into the company's earnings release has 76% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 1.63% with revenue increasing by 4.48%. Short interest has decreased by 13.7% since the company's last earnings release while the stock has drifted higher by 1.1% from its open following the earnings release to be 8.9% above its 200 day moving average of $1,077.00. Overall earnings estimates have been revised lower since the company's last earnings release. Option traders are pricing in a 5.5% move on earnings and the stock has averaged a 5.6% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Adobe Inc. $306.23

Adobe Inc. (ADBE) is confirmed to report earnings at approximately 4:05 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $2.26 per share on revenue of $2.97 billion and the Earnings Whisper ® number is $2.30 per share. Investor sentiment going into the company's earnings release has 74% expecting an earnings beat. The company's guidance was for earnings of approximately $2.25 per share. Consensus estimates are for year-over-year earnings growth of 23.50% with revenue increasing by 20.51%. Short interest has increased by 44.6% since the company's last earnings release while the stock has drifted higher by 11.2% from its open following the earnings release to be 9.1% above its 200 day moving average of $280.60. Overall earnings estimates have been revised higher since the company's last earnings release. On Monday, November 25, 2019, there was some notable buying of 505 contracts of the $340.00 call expiring on Friday, December 20, 2019. Option traders are pricing in a 3.9% move on earnings and the stock has averaged a 3.8% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Broadcom Limited $316.05

Broadcom Limited (AVGO) is confirmed to report earnings at approximately 4:15 PM ET on Thursday, December 12, 2019. The consensus earnings estimate is $5.36 per share on revenue of $5.76 billion and the Earnings Whisper ® number is $5.47 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 7.27% with revenue increasing by 5.80%. Short interest has increased by 22.8% since the company's last earnings release while the stock has drifted higher by 6.2% from its open following the earnings release to be 9.7% above its 200 day moving average of $288.21. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, December 5, 2019 there was some notable buying of 625 contracts of the $135.00 call expiring on Friday, January 15, 2021. Option traders are pricing in a 5.2% move on earnings and the stock has averaged a 4.7% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Ciena Corporation $35.00

Ciena Corporation (CIEN) is confirmed to report earnings at approximately 7:00 AM ET on Thursday, December 12, 2019. The consensus earnings estimate is $0.66 per share on revenue of $964.80 million and the Earnings Whisper ® number is $0.67 per share. Investor sentiment going into the company's earnings release has 72% expecting an earnings beat. The company's guidance was for revenue of $945.00 million to $975.00 million. Consensus estimates are for year-over-year earnings growth of 26.92% with revenue increasing by 7.28%. Short interest has increased by 66.6% since the company's last earnings release while the stock has drifted lower by 9.5% from its open following the earnings release to be 11.0% below its 200 day moving average of $39.32. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, December 6, 2019, there was some notable buying of 1,156 contracts of the $36.00 put expiring on Friday, December 13, 2019. Option traders are pricing in a 9.0% move on earnings and the stock has averaged a 10.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)

MongoDB, Inc. $131.17

MongoDB, Inc. (MDB) is confirmed to report earnings at approximately 4:05 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.28 per share on revenue of $99.73 million and the Earnings Whisper ® number is ($0.26) per share. Investor sentiment going into the company's earnings release has 63% expecting an earnings beat. The company's guidance was for a loss of $0.29 to $0.27 per share on revenue of $98.00 million to $100.00 million. Consensus estimates are for year-over-year earnings growth of 15.15% with revenue increasing by 53.47%. Short interest has increased by 15.2% since the company's last earnings release while the stock has drifted lower by 16.3% from its open following the earnings release to be 5.1% below its 200 day moving average of $138.19. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, November 19, 2019, there was some notable buying of 970 contracts of the $210.00 call expiring on Friday, December 20, 2019. Option traders are pricing in a 10.1% move on earnings and the stock has averaged an 8.7% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Chewy, Inc. $24.95

Chewy, Inc. (CHWY) is confirmed to report earnings at approximately 4:10 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.16 per share on revenue of $1.21 billion and the Earnings Whisper ® number is ($0.15) per share. Investor sentiment going into the company's earnings release has 57% expecting an earnings beat. Short interest has increased by 40.7% since the company's last earnings release while the stock has drifted lower by 14.6% from its open following the earnings release. Overall earnings estimates have been revised lower since the company's last earnings release. The stock has averaged a 6.4% move on earnings in recent quarters.

(CLICK HERE FOR THE CHART!)

Stitch Fix, Inc. $24.09

Stitch Fix, Inc. (SFIX) is confirmed to report earnings at approximately 4:05 PM ET on Monday, December 9, 2019. The consensus estimate is for a loss of $0.06 per share on revenue of $441.04 million and the Earnings Whisper ® number is ($0.04) per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. The company's guidance was for revenue of $438.00 million to $442.00 million. Consensus estimates are for earnings to decline year-over-year by 160.00% with revenue increasing by 20.43%. Short interest has increased by 30.9% since the company's last earnings release while the stock has drifted higher by 41.7% from its open following the earnings release to be 2.4% below its 200 day moving average of $24.69. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, November 21, 2019, there was some notable buying of 1,000 contracts of the $13.00 put expiring on Friday, January 17, 2020. Option traders are pricing in a 20.0% move on earnings and the stock has averaged an 18.9% move in recent quarters.

(CLICK HERE FOR THE CHART!)

DISCUSS!

What are you all watching for in this upcoming trading week?
I hope you all have a wonderful weekend and a great trading week ahead, wallstreetbets.
submitted by bigbear0083 to wallstreetbets


Best Leading Indicators For Forex And Stock Market

5. Ichimoku Indicator. An Ichimoku chart, developed by Goichi Hosoda, represents a trend-following system with an indicator similar to moving averages. Ichimoku is one of the trading indicators that predicts price movement rather than only measuring it. The advantage of the indicator is that it offers a unique perspective on support and resistance...

Strategy is a key element of long-term successful binary options trading. The best binary trading strategies can be defined as a method or signal which consistently makes a profit. Some strategies might focus on expiry times, like 60 second, 1 hour or end of day trades; others might use a particular system (like Martingale) or technical indicators like moving averages, Bollinger bands or ...

You can trade binary options without technical indicators and rely on the news. The benefit of the news is that it's relatively straightforward to understand and use. You'll need to look for company announcements, such as the release of financial reports. Alternatively, look for more global news that could impact an entire market, such as a move away from fossil fuels. Small announcements ...

Here is our tried and tested list of the top 10 best-performing non-repainting Forex indicators for MT4 that actually work. This list will be updated every six months with new indicators added to it, so feel free to submit your suggestions and indicators to our staff for review by posting them on either one of our social media pages: Twitter and Facebook.

The put-call ratio measures trading volume using put options versus call options. Instead of the absolute value of the put-call ratio, the changes in its value indicate a change in overall market ...

Exchange versus OTC (Over the Counter) Brokers; Exchange Brokers; OTC (Over The Counter) Brokers; Payment Methods. We have compared the best regulated binary options brokers and platforms in November 2020 and created this top list. Every broker and platform has been personally reviewed by us to help you find the best binary options platform for both beginners and experts.

Knowing what indicators to use and what the best combination of technical indicators is can dramatically improve your chart-reading skills. If you use the wrong technical indicators, this can lead to inaccurate price interpretation and subsequently to bad trading decisions. Our team at Trading Strategy Guides has meticulously hand-picked every technical indicator so it…

Traditional Options Versus Binary Options: options trading is seen by many people as a safe way to speculate on the prices of...

Stacking the odds – combining the best indicators in a meaningful way. Now comes the interesting part. The screenshot below shows a chart with three different indicators that support and complement each other. The RSI measures and identifies momentum plays, the ADX finds trends and the Bollinger Bands measure volatility. Note here that we do not use the Bollinger Bands as a trend indicator ...

Indicators are an essential part of any good binary options trader's toolbox. By using indicators effectively, you will be giving yourself a large advantage over people who trade based solely upon the feel of an underlying asset. While these traders might be right, sometimes even more than 50 percent of the time, they are not using one of the best and most effective tools that currently exist ...
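Two of the indicators touched on above are easy to express in a few lines of code: the put/call ratio is simply put volume divided by call volume, and RSI is a smoothed ratio of average gains to average losses. The sketch below uses one common Wilder-style RSI formulation, offered purely as an illustration; the data inputs are assumed to be pandas Series you supply yourself.

```python
# Illustrative implementations of two indicators mentioned above.
import pandas as pd

def put_call_ratio(put_volume: float, call_volume: float) -> float:
    """Put/call ratio: values well above 1 lean bearish, well below 1 lean bullish."""
    return put_volume / call_volume

def rsi(closes: pd.Series, period: int = 14) -> pd.Series:
    """Relative Strength Index with Wilder-style exponential smoothing."""
    delta = closes.diff()
    avg_gain = delta.clip(lower=0.0).ewm(alpha=1.0 / period, min_periods=period).mean()
    avg_loss = (-delta.clip(upper=0.0)).ewm(alpha=1.0 / period, min_periods=period).mean()
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```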


