Lisp is Forth where "(" pushes the next word onto a separate execution stack and ")" executes it. Hence (+ 1 2) is equal to 1 2 +. I made this discovery maybe in 1975, but Forth was not yet invented, or rather the Internet was not yet invented, so I did not know about Forth. So it was a property of my very own Symbolic Stack Machine on a Nova 1200.
This is an awesome project! I think writing a bootstrapping Lisp is probably one of the best uses for a Forth.
I was surprised that they said, "One of the more involved parts of this interpreter is the reader, where I had to do quite a lot of stack juggling to keep everything in line", and I think I can offer some useful pointers not only for the original author but also for anyone else who decides to go off and write stuff in Forth even though it's 02021.
My READ is 14 lines of Forth, and even accounting for the more horizontal (not to say cramped) layout of my code and its lack of symbol support, I think it's still significantly simpler and more readable than the 60 or so lines of Forth used here. Contrast:
: lisp-skip-ws ( e a -- e a )
  lisp-read-char
  begin
    dup 0<> over lisp-is-ws and
  while
    drop lisp-read-char
  repeat
  0<> if
    lisp-unread-char
  endif ;
with (slightly reformatted)
: wsp begin peek bl = while getc drop repeat ;
There are four simplifications here:
1. My definition of "whitespace" is just "equal to the space character BL". Arguably this is cheating, but it's a small difference.
2. I'm handling EOF with an exception inside PEEK, rather than an extra conditional case in every function that calls PEEK; this implies you have to discard whitespace before your tokens rather than after them, but that's what both versions are doing anyway.
3. I'm using a high-level word PEEK to represent the parser-level concept of "examine the next character without consuming it" rather than the implementation-level concept "dup 0<> over". This is facilitated by putting the state of the input stream into the VALUEs READP and READEND instead of trying to keep it on the stack, which would have given me a headache and wasted a lot of my time debugging. PEEK and GETC can always be called regardless of what's on the stack, while LISP-READ-CHAR only works at the beginning of an "expression".
4. The choice of the PEEK/GETC interface instead of GETC/UNGETC is also a very slight improvement. It would be less of a difference if LISP-UNREAD-CHAR were capable of unreading an EOF, but in general, to the extent that you can design your internal interfaces to avoid making temporary side effects you must undo later, you will have fewer bugs from forgetting to undo them.
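For concreteness, here is one plausible shape for that input layer, with the stream state in VALUEs rather than on the stack (a sketch only; READP and READEND are the names mentioned above, but my actual definitions may differ in detail):

  0 value readp   0 value readend   \ current position and end of input
  : eod? ( -- f ) readp readend >= ;
  : peek ( -- c ) eod? if 1 throw then  readp c@ ;  \ EOF raises an exception
  : getc ( -- c ) peek  readp 1+ to readp ;

Because the position lives in READP, PEEK and GETC can be called with anything at all on the stack, which is what makes words like WSP so short.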
In other parts of the code the situation is somewhat worse. Consider the mental gymnastics needed to keep track of all the stack state in this word:
: lisp-read-token ( e a -- e a a u )
  lisp-skip-ws
  0 >r
  lisp-read-char
  begin
    dup [char] ) <> over 0<> and over lisp-is-ws 0= and
  while
    token-buffer r@ + c! r> 1+ >r lisp-read-char
  repeat
  0<> if
    lisp-unread-char
  endif
  token-buffer r> ;
I didn't have a separate tokenizer except for READ-NUM, because all my other tokens were parentheses. But contrast:
: (read-num) 0 begin eod? if exit then
peek [char] - = if -1 to (sign) getc drop
else peek isdigit if getc digit else exit then then again ;
\ That took me like half an hour to debug because I was confusing char
\ and [char].
: read-num 1 to (sign) (read-num) (sign) * int2sex ;
Mine is not beautiful code by any stretch of the imagination. But contrast PEEK ISDIGIT IF GETC DIGIT ELSE EXIT THEN — in popular infix syntax, that would be if (isdigit(peek()) then digit(getc()) else return — with TOKEN-BUFFER R@ + C! R> 1+ >R LISP-READ-CHAR! Imagine all the mental effort needed to keep track of all those stack items! Avoid making things harder for yourself that way; as Kernighan and Plauger famously said, debugging is twice as hard as writing the code in the first place, so if you write the code as cleverly as you can, how will you ever debug it? You can define words to build up a token and write your token reader in terms of them:
create token-buffer 128 allot token-buffer value tokp
: token-length tokp token-buffer - ;
: new-token token-buffer to tokp ; : token-char tokp c! tokp 1+ to tokp ;
Or similar variations. Either way, with this approach, you don't have to keep track of where your token buffer pointer (or length) is; it's always in tokp (or token-length), not sometimes on the top of stack and sometimes on the top of the return stack.
In this case the code doesn't get shorter (untested):
: lisp-read-token ( e a -- e a )
  lisp-skip-ws
  new-token
  lisp-read-char
  begin
    dup [char] ) <> over 0<> and over lisp-is-ws 0= and
  while
    token-char lisp-read-char
  repeat
  0<> if
    lisp-unread-char
  endif ;
but it does get a lot simpler. You don't have to wonder what "0 >R" at the beginning of the word is for or decipher R@ + C! R> 1+ >R in the middle. You no longer have four items on the stack at the end of the word to confuse you when you're trying to understand lisp-read-token's caller. And now you can test TOKEN-CHAR interactively, which is helpful for making sure your stack effects are right so you don't have to debug stack-effect errors later on (this is an excerpt from an interactive Forth session):
: token-type token-buffer token-length type ; ok
token-type ok
char x token-char token-type x ok
char y token-char token-type xy ok
bl token-char token-type xy ok
.s <0> ok
new-token token-type ok
char z token-char token-type z ok
This is an illustration of a general problem that afflicted me greatly in my first years in Forth: just because you can keep all your data on the stack (a two-stack machine is obviously able to emulate a Turing machine) doesn't mean you should. The operand stack is for expressions, not for variables. Use VARIABLEs. Or VALUEs, if you prefer. Divide your code into "statements" between which the stack is empty (except for whatever the caller is keeping there). Completely abjure stack operations except DROP: no SWAP, no OVER, and definitely no ROT, NIP, or TUCK. Not even DUP. Then, once your code is working, maaaybe go back and put one variable on the operand stack, with the appropriate stack ops. But only if it makes the code more readable and debuggable instead of less. And maaaybe another variable on the return stack, although keep in mind that this impedes factoring — any word you factor out of the word that does the return-stack manipulation will be denied access to that variable.
Think of things like SWAP and OVER as the data equivalents of a GO TO statement: they can shorten your code, sometimes even simplify it, but they can also tremendously impede understandability and debuggability. They easily create spaghetti dataflow.
Failure to observe this practice is responsible for most of the difficulty I had in my first several years of Forth, and also, I think, most of the difficulty schani reports, and maybe most of the difficulty most programmers have in Forth. If you can figure out how you would have written something in a pop infix language, you can write it mechanically in Forth without any stack juggling (except DROP). For example:
v := f(x[i], y * 3);
if (frob(v)) then (x[j], y) := warp(x[j], 37 - y);
becomes something like this, depending on the particular types of things:
i x @ y c@ 3 * f v !
v @ frob if j x @ 37 y @ - warp j x ! y c! then
Now, maybe you can do better than the mechanical translation of the infix syntax in a particular case — in this case, maybe it would be an improvement to rewrite "v ! v @" to "dup v !", or maybe not — but there's no need to do worse.
This is not to diminish schani's achievements with forthlisp, which remains wonderful! I haven't ever managed to write a Lisp in Forth myself, despite obviously feeling the temptation, just in C and Lua. Code that has already been written is far superior to code that does not exist.
But, if they choose to pursue it further, hopefully the fruits of my suffering outlined above will nourish them on their path, and anyone else who reads this.
I'm the original author. Thank you for your explanations! I added a link to this post to the README.
I believe the reason I did all the stack juggling was that I wanted to write it "the Forth way", or maybe the "pure stack-based way", and using variables seemed like cheating.
I certainly won't be pursuing this further (I wrote it 20 years ago as a programming exercise), but I hope somebody will learn from your exposition :-)
I think "using variables seemed like cheating" was a lot of my motivation, too, and it led me into a great deal of mischief. Despite what I thought at first, I think "the Forth way" does use variables pretty often, although I guess different people's "Forth way" is different. But consider Chuck Moore's Forth Way:
> A Forth word should not have more than one or two arguments. This stack which people have so much trouble manipulating should never be more than three or four deep. ... But as to stack parameters, the stacks should be shallow. On the i21 we have an on-chip stack 18 deep. This size was chosen as a number effectively infinite.
> The words that manipulate that stack are DUP, DROP and OVER period. There's no ..., well SWAP is very convenient and you want it, but it isn't a machine instruction. But no PICK[,] no ROLL, none of the complex operators to let you index down into the stack. This is the only part of the stack, these first two elements, that you have any business worrying about.
> The others are on the stack because you put them there and you are going to use them later after the stack falls back to their position. They are not there because [you're] using them now. You don't want too many of those things on the stack because you are going to forget what they are.
> So people who draw stack diagrams or pictures of things on the stack should immediately realize that they are doing something wrong. Even the little parameter pictures that are so popular. You know, if you are defining a word and then you put in a comment showing what the stack effects are, and it indicates F and x and y:
> F ( x - y )
> I used to appreciate this back in the days when I let my stacks get too complicated, but no more. We don't need this kind of information.
I was trying to find the "sheesh, just use a variable" quote I seem to remember from him, but I can't find it. Maybe I'm inadvertently attributing my own ideas to him. But if you look at his code (there are some excerpts in http://www.ultratechnology.com/fsc98.htm and http://www.ultratechnology.com/tape1.htm) you'll see he's pretty sparing with stack operations and uses variables (in memory) pretty regularly.
Certainly my recommendation here—start with statements and expressions, use lots of variables—differs from, say, Jeff Fox's recommendation. And I'm pretty sure Jeff Fox was a better Forth programmer than I am. And I think it's common that, with enough thought, you can find a better way to design the code that reduces the amount of state you have to keep at memory addresses. But I think a programmer already experienced in another language is much more likely to shoot herself in the foot in Forth by using too many stack operations and too many values on the stack, than by using too many variables, so I think it's probably a better learning path.
(FWIW, I think the advice to not write stack comments is probably bad advice, even though Chuck Moore was and probably is a much better Forth programmer than I am.)
Also, though, and I feel like I should have emphasized this more from the outset, I have never shipped code to users in Forth. In fact, I don't even have any personal utility programs written in Forth. That's because I still find Forth hard to read, write, and debug, despite being fascinated with it for 25 years. I think my motivation for writing the above readprint.fs code was to sort of see how terrible I was at writing Forth (answer: it took me 2 hours to write 30 lines of code, so still pretty terrible, but at least I did manage to write a working parser.) So please take my opinions on the matter with a grain of salt.
The Lisp adage about '<something, something> 100 functions with 12 data types vs <something, something>' seems to relate to treating the argument stack as the tuple, so that you can have multiple functions that take the var-arg union of the shape of the stack. I don't think I am explaining it very well, but I think this is the gist of how one constructs algebras or monoids over the shape of the stack.
Perlis's epigram: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
It's not strictly about Lisp; Perlis was fond of Lisp but his true love was APL. But you could use it to advocate JSON, bytestream shell pipelines, or even TCP/IP. Or a flat byte-addressable memory, I suppose, like Forth or amd64.
Are you thinking of something like the static typing system of Christopher Diggins's "Cat" language, or its children Kitten and Mlatu?
Heh, the kragen[hashmap] comes through. Thanks for the quote.
While those are interesting, probably in the same way that Shen is interesting, I was thinking more along the lines that the stack is an open-ended product type (I made that up) and that operations on the stack are like a zipper, map, fold, or product. That there is a projectional aspect to the stack, its expansion and contraction and shape over time.
The engines that do protein folding feel like they have similarities.
I still haven't grokked your whole description of your Lisp reader, I'll have to sleep on it. Is it related in structure to the METAII meta compiler or parser combinators?
That sounds like the insight cdiggins based Cat's type system on, but I'm not entirely sure in part because I don't really understand Cat's type system. As an example, though, for the code
popop = { pop pop}
Cat infers the parametrically polymorphic typing judgment
popop : ('t0 't1 't2 -> 't2)
where 't2 is the type of the rest of the stack, I guess, and juxtaposition of types is a sort of product operation (noncommutative, but I think associative, and thus perhaps a monoid).
Not sure what you mean about "projectional" — do you mean that "pop" is a "projection" in the relational sense, in that it maps, for example, the stack 3 8 1 (with the top on the left) and the stack 7 8 1 to the same resulting stack state 8 1?
I don't know anything about protein folding.
> I still haven't grokked your whole description of your Lisp reader, I'll have to sleep on it. Is it related in structure to the METAII meta compiler or parser combinators?
Well, I didn't really write much of a description of the reader. It's a recursive-descent predictive parser, same as Probst's reader, and probably most Lisp readers. READ sets up the input pointers and calls (READ), which is ((READ)), which discriminates between lists and atoms (which are numbers) by looking for a "(". If it doesn't find one, it calls READ-NUM to read a number. But if it does, it calls READ-TAIL, which recursively reads the list contents (by calling (READ)) until it finds a ")", then returns back up, consing up the list as it goes. Probst's code works the same way, with the correspondences READ ↔ LISP-LOAD-FROM-STRING, (READ) ↔ LISP-READ-LISP, ((READ)) ↔ _LISP-READ-LISP, READ-NUM ↔ LISP-READ-TOKEN/-NUMBER/-SYMBOL, and READ-TAIL ↔ LISP-READ-LIST.
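In skeleton form, the mutual recursion looks something like this (not the actual code from either reader; WSP, PEEK, GETC, READ-NUM, NIL, and CONS are assumed from the surrounding interpreter):

  defer read-tail   \ forward reference for the mutual recursion
  : (read) ( -- sexp )   \ a list if the next token is "(", else a number
    wsp  peek [char] ( = if  getc drop read-tail  else  read-num  then ;
  : (read-tail) ( -- list )   \ read elements until the closing ")"
    wsp  peek [char] ) = if  getc drop nil
    else  (read) recurse cons  then ;
  ' (read-tail) is read-tail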
META II has the similarity that it generates recursive-descent parsers with predictive parsing, but the dissimilarity that it's a domain-specific language for writing parsers. Parser combinators are a technique for embedding any domain-specific parsing language in a general-purpose host language, regardless of what parsing algorithm is used, though Packrat may be the most common choice, and Packrat has certain similarities to recursive-descent parsing.
You could follow Lisp's car/caar/caaar cdr/cddr/cdddr cdadar/cadadar/caaddaar naming conventions in Forth ("waiting for the other shoe to drop") or PostScript ("waiting for the other shoe to pop"):
Forth:
: droop drop drop ;
: drooop drop drop drop ;
: droooop drop drop drop drop ;
PostScript:
/poop { pop pop } def
/pooop { pop pop pop } def
/poooop { pop pop pop pop } def
This is interesting from the standpoint of the different mentality of Forth coders. If they really needed a fast removal of 4 stack items they would first code what you did.
Then after things were working they might replace droooop with something like this:
( Forth assembler pseudo-code follows )
code droooop sp 4 cells addi, next, endcode
One instruction to move the CPU stack pointer.
Unthinkable to touch the stack pointer in most other environments but Toto we're not in Kansas. :)
*NEXT is the traditional name of the "return to Forth" routine in a threaded Forth. A return instruction would be used in a native-code Forth. Carnal knowledge of the internals is required and used by Forth coders.
In a lot of FORTH implementations, constant numbers like 0, 1, -1 and others are hard coded, not just for speed but also for space: a call to a code word only takes one cell, instead of using two cells with a LIT [value].
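To sketch the difference in an indirect-threaded Forth (cell layouts vary between implementations, so take the details as illustrative):

  \ : foo ... 1 ... ;    the literal compiles, inside FOO, as
  \     LIT | 1          ( two cells: the LIT primitive, then the value )
  \ given a word ONE (possibly a code word):
  \ : one 1 ;
  \ : foo ... one ... ;  compiles as just
  \     ONE              ( one cell )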
Here's some Bresenham line code I first wrote in FORTH (Mitch Bradley's SunForth on a 68k Sun-2 with cg2 graphics board), then translated to 68k code (using the rpn FORTH assembler) -- the FORTH code is commented out before the corresponding assembly code:
The cg2 board wasn't directly memory mapped -- it had a really weird way of selecting and accessing one row and one column at a time, which was kinda convenient for drawing lines and curves, and ropping blocks around, but nowhere as convenient as direct memory mapped access would have been.
* To save on virtual address space, we don't map the plane or pixel mode
* memory (first two megabytes). However, when calling mmap the user has
* to add 2M to the desired offset anyway (goofy, huh?).
I never had a cgtwo, myself (because I never used a Sun2). Why do you suppose they used that weird "bank"-switching scheme? It occupied 4MiB of address space anyway!
Why were you in 640×480? I thought the cgtwo was 1152×900 like God intended, and that's what the .h says too.
Was the reason it was important to save virtual address space important that the 68010 ignored the upper 8 bits of virtual addresses? (Helllo, pointer typetag...)
The assembler looks pretty pleasant, though all the 68k operand size suf-, uh, prefixes make the code a bit LONGer than it could be. In gas I really miss having a macro system that can express nested control structures (so I guess I should quit my bitching and write one and use it). I suppose the tests for the IF and WHILE are limited to <, =, <>, >, 0<, 0>, and 0<>?
I'm curious what you think of my analogy upthread between stack-manipulation words and goto. Does it reflect your experience? I'd forgotten you'd done a bunch of Forth stuff.
The wonderful thing about FORTH assemblers is that you have the full power of FORTH to write macros and code generation code with!
That particular assembler had structured control flow like if, while, etc.
It might have actually been a cgone, since the device name was /dev/cgone0. But the header file said cg2. Whatever it was, it was quite slow!
Years later, John Gilmore mentioned that he wrote that .h file with the C structures/unions that mapped out all the device registers.
I bought a copy of Aviator by Curt Priem and Bruce Factor, that ran on my SS2 pizzabox's GX "LEGO: Low End Graphics Option" SBus card (an 8-bit color + 2-bit monochrome overlay plane graphics accelerator):
>AVIATOR 1.5 FOR SUN NETWORKS OPENS UP GRAPHICS WORKSTATION GAMES MARKET. By CBR Staff Writer, 08 Jul 1991.
Not sure why the memory mapping was so weird -- but at least it wasn't as bizarre as the Apple ][! It did have some weird undocumented limitations, like you could only write to the colormap during vertical retrace (which I discovered the hard way -- it didn't seem to work for no apparent reason, except for the occasional times when it did kinda work).
Here's a reference to the cgone device that sounds about right:
* SUN120 A Sun Microsystems workstation, model Sun2/120 with
* a separate colorboard (/dev/cgone0) and the
* Sun optical mouse. Also works on some old Sun1s with
* the 'Sun2 brain transplant'.
Frame Buffer History Lesson
Last Updated: 24th November 1998
cg1/bw1: device name : "/dev/cgoneX" "/dev/bwoneX"
The color and monochrome framebuffer of sun100u.
It is not a crime to know nothing about these. (And this was 7 years ago!)
If you were to implement such a PostScript-based programming language for a Racal PDP-11 clone (or an HP calculator, or a P-code machine), whether in NoCal or in SoCal, I think you'd have to call it FECAL.
If you really wanted to provide such a set of operators for, say, the top N stack items, you could give them systematic names with some distinctive scheme; limiting such stack operators to the top 3, for example, with no more than 2 extra results, you could provide the operators x→ (drop), x→x (nop), x→xx (bad Mexican beer), x→xxx (dupup),
xy→ (2drop), xy→x (nip), xy→y (drop again, for consistency), xy→xx (nip dup), xy→xy (nop), xy→yx (exch†), xy→yy (drop dup), xy→xxx (nip dup dup), xy→xxy (dup again), xy→xyx (tuck), xy→xyy, xy→yxx, xy→yxy, xy→yyx, xy→yyy, xy→xxxx, xy→xxxy, xy→xxyx, xy→xxyy, xy→xyxx, xy→xyxy, xy→xyyx, xy→xyyy, xy→yxxx, xy→yxxy, xy→yxyx, xy→yxyy, xy→yyxx, xy→yyxy, xy→yyyx, xy→yyyy,
xyz→, xyz→x, xyz→y, xyz→z, xyz→xx, xyz→xy (condescending answer on Stack Overflow), xyz→xz, xyz→yx, xyz→yy, xyz→yz, xyz→zx, xyz→zy, xyz→zz, xyz→xxx (programmers over 18 only), xyz→xxy (Klinefelter syndrome), xyz→xxz, xyz→xyx, xyz→xyy (Jacobs syndrome), xyz→xyz, xyz→xzx, xyz→xzy, xyz→xzz, xyz→yxx, xyz→yxy, xyz→yxz, xyz→yyx, xyz→yyy (bargaining, denial), xyz→yyz, xyz→yzx, xyz→yzy, xyz→yzz, xyz→zxx, xyz→zxy, xyz→zxz, xyz→zyx, xyz→zyy, xyz→zyz, xyz→zzx, xyz→zzy, xyz→zzz, xyz→xxxx, xyz→xxxy, xyz→xxxz, xyz→xxyx, xyz→xxyy, xyz→xxyz, xyz→xxzx, xyz→xxzy, xyz→xxzz, xyz→xyxx, xyz→xyxy, xyz→xyxz, xyz→xyyx, xyz→xyyy, xyz→xyyz, xyz→xyzx, xyz→xyzy, xyz→xyzz, xyz→xzxx, xyz→xzxy, xyz→xzxz, xyz→xzyx, xyz→xzyy, xyz→xzyz, xyz→xzzx, xyz→xzzy, xyz→xzzz, xyz→yxxx, xyz→yxxy, xyz→yxxz, xyz→yxyx, xyz→yxyy, xyz→yxyz, xyz→yxzx, xyz→yxzy, xyz→yxzz, xyz→yyxx, xyz→yyxy, xyz→yyxz, xyz→yyyx, xyz→yyyy, xyz→yyyz, xyz→yyzx, xyz→yyzy, xyz→yyzz, xyz→yzxx, xyz→yzxy, xyz→yzxz, xyz→yzyx, xyz→yzyy, xyz→yzyz, xyz→yzzx, xyz→yzzy, xyz→yzzz, xyz→zxxx, xyz→zxxy, xyz→zxxz, xyz→zxyx, xyz→zxyy, xyz→zxyz, xyz→zxzx, xyz→zxzy, xyz→zxzz, xyz→zyxx, xyz→zyxy, xyz→zyxz, xyz→zyyx, xyz→zyyy, xyz→zyyz, xyz→zyzx, xyz→zyzy, xyz→zyzz, xyz→zzxx, xyz→zzxy, xyz→zzxz, xyz→zzyx, xyz→zzyy, xyz→zzyz, xyz→zzzx, xyz→zzzy, xyz→zzzz (sleep 4), xyz→xxxxx, xyz→xxxxy, xyz→xxxxz, xyz→xxxyx, xyz→xxxyy, xyz→xxxyz, xyz→xxxzx, xyz→xxxzy, xyz→xxxzz, xyz→xxyxx, xyz→xxyxy, xyz→xxyxz, xyz→xxyyx, xyz→xxyyy, xyz→xxyyz, xyz→xxyzx, xyz→xxyzy, xyz→xxyzz, xyz→xxzxx, xyz→xxzxy, xyz→xxzxz, xyz→xxzyx, xyz→xxzyy, xyz→xxzyz, xyz→xxzzx, xyz→xxzzy, xyz→xxzzz, xyz→xyxxx, xyz→xyxxy, xyz→xyxxz, xyz→xyxyx, xyz→xyxyy, xyz→xyxyz, xyz→xyxzx, xyz→xyxzy, xyz→xyxzz, xyz→xyyxx, xyz→xyyxy, xyz→xyyxz, xyz→xyyyx, xyz→xyyyy, xyz→xyyyz, xyz→xyyzx, xyz→xyyzy, xyz→xyyzz, xyz→xyzxx, xyz→xyzxy, xyz→xyzxz, xyz→xyzyx, xyz→xyzyy, xyz→xyzyz, xyz→xyzzx, xyz→xyzzy (Nothing happens), xyz→xyzzz, xyz→xzxxx, xyz→xzxxy, xyz→xzxxz, xyz→xzxyx, xyz→xzxyy, xyz→xzxyz, xyz→xzxzx, xyz→xzxzy, 
xyz→xzxzz, xyz→xzyxx, xyz→xzyxy, xyz→xzyxz, xyz→xzyyx, xyz→xzyyy, xyz→xzyyz, xyz→xzyzx, xyz→xzyzy, xyz→xzyzz, xyz→xzzxx, xyz→xzzxy, xyz→xzzxz, xyz→xzzyx, xyz→xzzyy, xyz→xzzyz, xyz→xzzzx, xyz→xzzzy, xyz→xzzzz, xyz→yxxxx, xyz→yxxxy, xyz→yxxxz, xyz→yxxyx, xyz→yxxyy, xyz→yxxyz, xyz→yxxzx, xyz→yxxzy, xyz→yxxzz, xyz→yxyxx, xyz→yxyxy, xyz→yxyxz, xyz→yxyyx, xyz→yxyyy, xyz→yxyyz, xyz→yxyzx, xyz→yxyzy, xyz→yxyzz, xyz→yxzxx, xyz→yxzxy, xyz→yxzxz, xyz→yxzyx, xyz→yxzyy, xyz→yxzyz, xyz→yxzzx, xyz→yxzzy, xyz→yxzzz, xyz→yyxxx, xyz→yyxxy, xyz→yyxxz, xyz→yyxyx, xyz→yyxyy, xyz→yyxyz, xyz→yyxzx, xyz→yyxzy, xyz→yyxzz, xyz→yyyxx, xyz→yyyxy, xyz→yyyxz, xyz→yyyyx, xyz→yyyyy, xyz→yyyyz, xyz→yyyzx, xyz→yyyzy, xyz→yyyzz, xyz→yyzxx, xyz→yyzxy, xyz→yyzxz, xyz→yyzyx, xyz→yyzyy, xyz→yyzyz, xyz→yyzzx, xyz→yyzzy, xyz→yyzzz, xyz→yzxxx, xyz→yzxxy, xyz→yzxxz, xyz→yzxyx, xyz→yzxyy, xyz→yzxyz, xyz→yzxzx, xyz→yzxzy, xyz→yzxzz, xyz→yzyxx, xyz→yzyxy, xyz→yzyxz, xyz→yzyyx, xyz→yzyyy, xyz→yzyyz, xyz→yzyzx, xyz→yzyzy, xyz→yzyzz, xyz→yzzxx, xyz→yzzxy, xyz→yzzxz, xyz→yzzyx, xyz→yzzyy, xyz→yzzyz, xyz→yzzzx, xyz→yzzzy, xyz→yzzzz, xyz→zxxxx, xyz→zxxxy, xyz→zxxxz, xyz→zxxyx, xyz→zxxyy, xyz→zxxyz, xyz→zxxzx, xyz→zxxzy, xyz→zxxzz, xyz→zxyxx, xyz→zxyxy, xyz→zxyxz, xyz→zxyyx, xyz→zxyyy, xyz→zxyyz, xyz→zxyzx, xyz→zxyzy, xyz→zxyzz, xyz→zxzxx, xyz→zxzxy, xyz→zxzxz, xyz→zxzyx, xyz→zxzyy, xyz→zxzyz, xyz→zxzzx, xyz→zxzzy, xyz→zxzzz, xyz→zyxxx, xyz→zyxxy, xyz→zyxxz, xyz→zyxyx, xyz→zyxyy, xyz→zyxyz, xyz→zyxzx, xyz→zyxzy, xyz→zyxzz, xyz→zyyxx, xyz→zyyxy, xyz→zyyxz, xyz→zyyyx, xyz→zyyyy, xyz→zyyyz, xyz→zyyzx, xyz→zyyzy, xyz→zyyzz, xyz→zyzxx, xyz→zyzxy, xyz→zyzxz, xyz→zyzyx, xyz→zyzyy, xyz→zyzyz, xyz→zyzzx, xyz→zyzzy, xyz→zyzzz, xyz→zzxxx, xyz→zzxxy, xyz→zzxxz, xyz→zzxyx, xyz→zzxyy, xyz→zzxyz, xyz→zzxzx, xyz→zzxzy, xyz→zzxzz, xyz→zzyxx, xyz→zzyxy, xyz→zzyxz, xyz→zzyyx, xyz→zzyyy, xyz→zzyyz, xyz→zzyzx (Soda Springs), xyz→zzyzy, xyz→zzyzz, xyz→zzzxx, xyz→zzzxy, xyz→zzzxz, xyz→zzzyx, xyz→zzzyy, xyz→zzzyz, xyz→zzzzx, xyz→zzzzy, and 
xyz→zzzzz. You could certainly argue about the utility of many of these operators individually, not to say their mental risk as attractive nuisances, but their mnemonic value is indisputable.
______
† Where do you get an old PostScript printer? At an exch meet.
Hella Nor Cal or Totally So Cal?: The Perceptual Dialectology of California
Mary Bucholtz, Nancy Bermudez, Victor Fung, Lisa Edwards and Rosalva Vargas. Journal of English Linguistics 2007; 35; 325. DOI: 10.1177/0075424207307780
>Abstract
>This study provides the first detailed account of perceptual dialectology within California (as well as one of the first accounts of perceptual dialectology within any single state). Quantitative analysis of a map-labeling task carried out in Southern California reveals that California’s most salient linguistic boundary is between the northern and southern regions of the state. Whereas studies of the perceptual dialectology of the United States as a whole have focused almost exclusively on regional dialect differences, respondents associated particular regions of California less with distinctive dialects than with differences in language (English versus Spanish), slang use, and social groups. The diverse sociolinguistic situation of California is reflected in the emphasis both on highly salient social groups thought to be stereotypical of California by residents and nonresidents alike (e.g., surfers) and on groups that, though prominent in the cultural landscape of the state, remain largely unrecognized by outsiders (e.g., hicks).
Extra credit question:
Can you locate the isogloss designating the "101" / "The 101" line?
Yeah, some circles. It's guys getting to middle-ages or retirement age, having more time on their hands, feeling a bit nostalgic and are tinkering. That's fine, I do have a weak spot for Lisp and FORTH myself, but don't confuse that with applicability in the industry.
Small is the new big. Not being serious, but frameworks took the joy out of programming in a dogmatic way and lost a lot of the beauty and creativity in coding in exchange for too many features that a lot of the time we don't need. People started looking at the past and at what had been overlooked by many, and started to bring it back. I am glad I started learning Lisps, and Scheme in particular. I don't know why I like it so much, but I am thankful people bring old things into fashion, even if for personal growth. My ignorance, built upon what I thought at the time was the general consensus, was that Lisp was a forget-about-it proposition, that parentheses make it a horrible coding experience, which couldn't be further from the truth. I think I prefer this syntax now.
Keep up exploring and sharing folks, you are making a dent.
People are just poking around with it because it's a mature language that's been used for a lot of embedded things.
It's been in several bootloaders as well.
The FreeBSD bootloader uses it, so I technically use it any time I boot my machine, heh.
Imagine being able to run LISP in the bootrom of a random machine :)
Because Open Firmware, the more or less standard bootloader of PowerPC and SPARC, specified Forth as its command language. It even let you write PCI option ROMs in Forth to have ISA-independent boot drivers.
Open Firmware, written by Mitch Bradley, is actually just a byte-coded Forth custom-extended to be the system firmware.
:)
So it's not that the language was specified but rather the language is the Firmware. By using byte-codes for the Forth keywords and allowing new wordlists to be created and extended you get a nice virtual machine that has the low-level chops to let you write device drivers and such, as you mentioned, while keeping the object code very small.
It's quite clever. Unfortunately Forth itself is so "other" that it can never win the popularity contest. However, like Lisp, it is one of those mind-expanding paradigms that should be sampled for the insights into minimalist computing that it provides.
Don't forget to grab a programmer. I use a HiLetgo USBTiny programmer. They go for about $8. Also, do some googling about how to work with Forth on the Arduino. Not too much out there but it should help round out the FlashForth docs.
arduino-forth.com is a good resource as well.
It allows you to burn the FlashForth bootloader on an Arduino, or to re-burn the Arduino bootloader if you'd like to program your Arduino with C/C++ instead of Forth.
This blog has a good walkthrough of burning the FlashForth bootloader:
In addition to arduino-forth.com, definitely check out Starting Forth by Leo Brodie. It's a great book. Probably one of the best technical books out there. You can read it for free online. Sadly, it is out of print. You can find used copies though.
Once again, thanks for the links! I don’t have too much time right now, but I’ll definitely have to explore this when I get an opportunity.
> Starting Forth by Leo Brodie
Funny you mention this… I found it myself, and am most of the way through it already! I’m probably not going through it as thoroughly as it deserves, but it is indeed an excellent introduction.
One of the circles that I am in, people have been realistically considering Forth for a bootstrapper; it's very easy to build up the rest of Forth using Forth once you have a minimal set of keywords implemented in assembly for your hardware of choice.
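For example, assuming only a handful of primitives (say DUP, DROP, SWAP, >R, and R>, plus the text interpreter) exist in assembly, many of the usual stack words can be defined in Forth itself (a sketch; real kernels pick their primitive sets differently):

  : over ( a b -- a b a )    >r dup r> swap ;
  : nip  ( a b -- b )        swap drop ;
  : tuck ( a b -- b a b )    swap over ;
  : 2dup ( a b -- a b a b )  over over ;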
Not this, but I wanted to make a Scheme VM and expose it directly, forth-like. A nice benefit would be a memory safe forth with advanced data types and GC.
EDIT: to clarify, the point of this of course is to write only the minimal, performance-critical piece (the VM) in native code and build a complete Scheme environment on top of it. There have been many similar efforts (Scheme48, VSCM, etc.), but AFAIK none of them exposed the VM directly.
That's the article which got me started with Forth for use in microprocessor-based weather stations. I eventually did write a Scheme interpreter, as well as an SQL interpreter; a friend then wrote a Scheme compiler, which ran Scheme (as Forth code) 28 times faster.
I wish I could resurrect that code, but it was saved to an ancient CD in Apple's compressed format, and that file will not open now.
I don't know Forth or Lisp very well, so instead of bootstrapping one or the other, I could make a Forth that could compile the Lisp and a Lisp that could compile the Forth.
This was discussed in comp.lang.forth a few years back. It's very interesting as an instructional tool but is not optimal for GForth.
For example the symbol table is written in Forth whereas it would be faster to use GForth wordlists which give you a hashed lookup method.
There is a string-to-number routine written but GForth has >NUMBER which can handle double precision conversions and is in the kernel.
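For reference, a minimal unsigned conversion on top of the standard >NUMBER might look like this (a sketch: no sign handling, and it assumes the whole string is digits in the current BASE):

  : str>num ( c-addr u -- n )  0. 2swap >number 2drop drop ;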
However, it is all there and it works. For an experienced Forth user it wouldn't take much to improve it.
If it were compiled on a native-code Forth compiler like SwiftForth, iForth, or VFX, with a few better uses of the internal system resources, it would be fast enough to be useful, I suspect.