Wednesday, July 30, 2008

symeval

A common problem in Arc is that macros - even well-designed ones - can introduce subtle bugs when the code calling them happens to shadow a global that the expansion relies on. I thus propose the addition of a new axiom, a special form called symeval.


For example, consider a library which has a complicated global function:


(def cplx-fun (sym trap val)
  (some-complex-expression sym trap val))

Now the library designer intends to use this as a somewhat-internal function; what he or she expects people to use is a macro that wraps the call:


(mac cplx (val . body)
  (w/uniq sym
    `(cplx-fun ',sym (fn () ,@body) ,val)))

And of course, all was well and good... the library was so useful and powerful that people no longer felt the need to actually look inside it and bother with how it was implemented.


Until the day, of course, when some random newbie decided to do this:


(def my-fun (x)
  (let cplx-fun (fn (tmp) (cplx 42 tmp))
    (cplx answer
          (cplx-fun x))))

Unfortunately, inside the macro's expansion the reference to 'cplx-fun now resolves to the local version instead of the global one. So now the newbie had to bother with how the library worked, when it was already so good and perfect that nobody should have had to.
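
To see why, look at what (cplx answer (cplx-fun x)) expands into (the gensym name here is purely illustrative):

(cplx-fun 'gs42 (fn () (cplx-fun x)) answer)

Inside the let, that leading cplx-fun is the newbie's one-argument local function, not the library's three-argument global, so the call is simply wrong.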


symeval


I thus propose the addition of a new axiom, 'symeval. symeval is a special form, like fn or if, and thus cannot be overridden at all.


symeval will evaluate its argument, and check if the result is a symbol. If it's a symbol, it evaluates the symbol in the global environment.


As an optimization, the implementation can treat something of the form (symeval 'foo) as being equivalent to an ordinary global variable read; the important thing is that (symeval 'foo) will always read the global variable, regardless of any local variable named foo in the enclosing scope.
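
For example, once symeval exists, a local foo has no effect on what (symeval 'foo) reads:

(= foo 42)

(let foo "local"
  (list foo (symeval 'foo)))
; => ("local" 42) - the second element is the global binding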


This should be trivial to add in arc2c and hence in SNAP: we need only replace symeval forms (probably in xe) with global variable references if the argument is a quoted symbol, and transform them into a primitive otherwise.
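
A very rough sketch of such a pass, in Arc - the helper names make-global-ref and make-prim are invented purely for illustration, and the real arc2c code will certainly look different:

(def xe-symeval (expr)
  (let arg (cadr expr)
    (if (caris arg 'quote)
        (make-global-ref (cadr arg))       ; (symeval 'foo) -> plain global read
        (make-prim 'symeval (xe arg)))))   ; anything else -> runtime primitive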


Thus the macro is now:


(mac cplx (val . body)
  (w/uniq sym
    `(symeval!cplx-fun ',sym (fn () ,@body) ,val)))
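
Since symeval!cplx-fun is just ssyntax for (symeval 'cplx-fun), the newbie's problematic call now expands to roughly:

((symeval 'cplx-fun) 'gs42 (fn () (cplx-fun x)) answer)

The operator position always reads the global 'cplx-fun, so the local binding inside my-fun no longer breaks the macro.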

Tuesday, July 29, 2008

Continuation Closures

Closures in SNAP, much like closures in arc2c, are simply flat arrays of references to Arc objects. Closures in SNAP are also used to represent functions on the Arc side; that is, what Arc thinks is a function, is actually an object of the class Closure. The Closure object also contains a reference to an Executor, which is a scary name for something that executes. (more about Executor in another post)


Closures in SNAP are expected to be immutable: once they've been constructed, their contained values are not supposed to change (there isn't anything that stops you from doing that on the C++ side, other than a nasty comment). Continuation closures, on the other hand, are represented by the KClosure class, which is derived from class Closure; a KClosure has a reusable() member function which specifies whether or not the continuation closure is reusable, i.e. whether or not its contents can be overwritten with new values.


Why KClosure?


Having a separate KClosure class just for continuations is largely an optimization to avoid allocating an excessive number of closures. A continuation closure is usually invoked just once, at the end of a normal function. Typically, once the continuation itself ends - when it must invoke another function - its closure can be freed.


Of course, this is a garbage-collected system, and a copying one at that. "Freeing" memory just means not copying an object at the next collection, and "not copying" is what you want to happen to most of your data anyway; there is no explicit free. The best you can do before a collection is to reuse a piece of memory you've already allocated.


So a continuation can freely reuse its closure, because it won't get invoked again (barring a minor case i'll get to later) and we don't expect there to be any live data that still refers to the continuation.


And since what we need next is still a continuation closure, we can reuse the memory area by constructing the new continuation closure in place, inside the current one. So a continuation which finds that it must invoke another, non-continuation function - which will expect a continuation argument - can reuse its own continuation closure; this allows us to skip allocation in a large number of cases.


Continuation closures have an invariant: a reference to a continuation closure can only exist on the stack, or as an entry in another continuation closure. It follows that continuation closures cannot form anything other than a straight singly-linked list.


So why have a reusable() member function? The problem is a little feature called 'ccc.


'ccc (known in the Scheme world as call-with-current-continuation or call/cc) captures the current continuation and hands it back all trussed up as a first-class value. This also means that the continuation might be invoked, through the captured continuation closure, more than once - and our assumption so far has been that the continuation is invoked only once.
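
For example, in plain Arc (nothing SNAP-specific here), a continuation captured by 'ccc can be re-entered long after the function that created it has returned, and re-entered more than once:

(= saved nil)

(def capture ()
  (+ 1 (ccc (fn (k) (= saved k) 0))))

(capture)    ; => 1, and the continuation is now in saved
(saved 41)   ; re-enters the (+ 1 ...) inside capture: 42 pops out at the REPL
(saved 99)   ; and again: 100

If the continuation closure behind that (+ 1 ...) had been reused after the first return, the second and third calls would be looking at garbage.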


This is solved by having 'ccc call the banreuse() member function on the captured continuation closure, which causes reusable() to always return false afterward. This means that the continuation closure cannot be reused, and the VM will actually copy the continuation closure when reuse is requested (a sort of "copy-on-write").


Continuation closures that are not reusable may safely violate the invariant. Such continuation closures may now form an arbitrary directed graph, which may even be cyclic.


(this optimization was inspired by the parrot article refcounting isn't all bad and its followup what c's memory management gets rightish; instead of a full refcount, i use a single bit which means "more than one reference" - basically, that's the bit toggled by banreuse() and queried by reusable() - and like in the followup i use this only for continuation closures. i suggested this in a post on the Arc forum)


<edit>


Follow-up: can it go even faster?


Due to CPS conversion, code generated by the arc2c compiler (around which the SNAP virtual machine is built) tends to allocate a lot of closure objects, especially continuation closures.


As mentioned in the "what c's memory management gets rightish" link above, making use of a stack-like allocator - where continuation closure frames can be allocated and deallocated in last-in-first-out order - would severely decrease garbage collection pressure. In a CPS implementation like SNAP, this means that, barring 'ccc, a continuation that has exited can have its closure deallocated almost immediately.


This could in fact be done in SNAP: any continuation that exits without reusing its closure can specify deallocation of the closure; the deallocator deallocates the closure if it's the last allocated structure, or leaves it on the stack if not.


This will of course complicate GC somewhat. It also means that each heap has two separate allocation areas: one for normal allocation, and one for LIFO structures. Since the chain of continuations is usually a straight singly-linked list, this will allow even better reuse of closures; in fact, the current strategy of explicitly reusing closures can probably be replaced with the LIFO structure.


<edit>


Follow-up: LIFO implemented


The LIFO allocation-deallocation scheme has been implemented.

Monday, July 28, 2008

reducto: the most complex single bytecode in SNAP

stefano recently brought up a potential efficiency issue in implementations of Arc. Specifically, the problem is that some variadic functions, such as '+, are really the reduction of a more basic function on its arguments, i.e.


(def + rest
  (reduce <base>+ rest))

The problem is that variadic functions normally put their arguments in a list, and in the most common case, '+ and related functions (such as '-, '*, and '/) will only be called on two arguments - so every such call conses up a two-element list just to tear it apart again. Consing here is just a waste of memory and makes the dreaded GC loom nearer.


The solution which stefano proposed is to inline the code, so that potentially (+ a b c d) will become (<base>+ (<base>+ (<base>+ a b) c) d). However, inlining is a bit difficult in a dynamic language with 'eval; you need some way to "uninline" the code if the global functions are redefined.


(it also assumes that '+ associates left-to-right, but it's quite valid for the programmer to modify '+ to work right-to-left)
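
For instance, if (+ a b c d) has already been inlined into nested <base>+ calls, a later redefinition of '+ - sketched here with a hypothetical rreduce, a right-folding counterpart of the reduce used above - will not be reflected at those call sites unless the compiler can somehow uninline them:

(def + rest
  (rreduce <base>+ rest))   ; '+ now reduces right-to-left...

; ...but a call site that was already inlined still reads
; (<base>+ (<base>+ (<base>+ a b) c) d), i.e. left-to-right.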


My alternative, since I am defining my own virtual machine and bytecode interpreter, is simply to make a really, really complex bytecode which handles this problem and avoids allocating in the most common cases.


reducto


reducto effectively implements the reduction function and avoids allocation in the common cases where two or fewer arguments are given. However, it is not completely standalone; it also needs to know the function(s) to apply. It thus expects that the function it is part of contains three entries in its closure, which are three functions or applicable objects.


The reduction is only useful if two or more arguments are given; special handling must be specified for the case where 0 or 1 arguments are given. For example, '- negates its argument if only 1 argument is given. This is why the closure has 3 function entries - (1) a function that handles the 0-argument case, (2) a function that handles the 1-argument case, and (3) a function that is used for reduction, and handles 2-argument and higher cases.
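
In plain Arc terms, the three entries for '- might look something like the sketch below (the names and the error message are made up for illustration; <base>- stands for a primitive two-argument subtraction, in the same spirit as <base>+ above):

(def minus/0 ()      (err "- requires at least one argument"))  ; 0 arguments
(def minus/1 (a)     (<base>- 0 a))                             ; 1 argument: negate
(def minus/2+ (a b)  (<base>- a b))                             ; 2 or more: used for the reduction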


Running reducto: the simple cases


reducto chooses which function to apply based on the number of arguments it finds on the stack. Two of the arguments are the current closure and the continuation closure; the rest of the stack entries are the arguments given by the Arc code (the "Arc arguments"). If 2 or fewer Arc arguments are on the stack, then it simply directly indexes the closure: the 0-argument function is in closure index 0, 1-argument is in closure index 1, etc. For such simple cases, it performs the call directly, modifying only the "current closure" entry of the stack - no allocation is done. Simple, direct, and cheap in both memory and time.
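
Modeled very roughly in Arc - the real thing is a single C++ bytecode that works directly on the stack and, for these cases, allocates nothing - the dispatch looks something like this, where clos stands for the list of the three entry functions:

(def reducto-model (clos args)
  (case (len args)
    0 ((clos 0))
    1 ((clos 1) (car args))
    2 ((clos 2) (car args) (cadr args))
      ; 3 or more: reduce with the 2-argument entry
      (reduce (clos 2) args)))

Only that last branch needs any allocation at all.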


Of course, the interesting bit is for the non-trivial cases where there are 3 or more Arc arguments.


3 arguments or more


For the non-trivial cases, reducto must allocate a closure and store the extra arguments there; in order to reduce allocation, it uses a continuation closure, which can be reused. The continuation closure holds the current continuation, the function to invoke, and all the arguments except the first two, as well as an index integer.


The actual size of the created closure depends on the number of arguments. For each extra argument beyond the first two, a slot must be reserved for that argument. In addition, the continuation closure must store the final continuation, the function to invoke, and the index; in effect, the size of the closure is 1 + the number of Arc arguments (for example, a 4-argument call needs a 5-slot closure: the 2 extra arguments, plus the continuation, the function, and the index).


In the C++ source, the reducto_continuation executor handles the actual reduction operation; it refers to the continuation closure and calls the reduction function for each argument.


The index specifies which argument to call with next. Now, SNAP integers are currently boxed; fortunately, we know that the index number will not be used by anything else, so we can simply modify the contents of the box directly, without additional allocation.


Handling nonreusability


Recall that 'ccc could make a continuation closure non-reusable. In such a case, we cannot modify the index integer.


Since the index integer is now nonmodifiable and the continuation closure is now immutable, we will have to create a fresh continuation closure and copy the contents. This allows 'ccc to operate properly.


(note that now that i've thought about it, we might actually still be able to just create a closure that contains an index number and a reference to the non-reusable continuation closure, and suffer through an additional indirection, which would be a small price to pay compared to copying; we would need to define an additional executor for such a continuation, though)

Sunday, July 27, 2008

Shared Nothing Arc Processes: the introduction

Shared Nothing Arc Processes (SNAP) is a virtual machine designed for a massively multiprocess implementation of Arc, in which processes communicate by shared-nothing message passing.


Why shared-nothing message passing?


As an electronics engineer doing mostly digital designs, I think I can safely say that multicore, highly parallel programming is the future. ^.^ I find Erlang interesting, although I don't really like its syntax.


Of course, there's another way to coordinate multiple processes: software transactional memory (STM). And maybe it can be done for Arc. There's just one problem: I can grok shared-nothing message passing, but I can't grok STM. So, at least for now, shared-nothing message passing it is.


Why Arc?


Because it's very similar to Cadence SKILL, the extension language of Cadence's Electronic Design Automation (EDA) tools. I'm an electronics engineer specializing (somewhat) in IC design and testing, which means that I use Cadence products quite a bit in the office; SKILL was the first Lisp-like I seriously programmed in.


Arc and SKILL have the following similarities:



  • Lisp-1, at least for SKILL++ mode.

  • t and nil

  • List-based macros, like in Common Lisp and unlike hygienic macros in Scheme


Why bother?


Why do this, when there's already a good, mature implementation of shared-nothing message passing, Erlang? Why do this when PG has finally released Arc, and some dabblers are building Arc implementations from scratch all over, including at least one compiler that compiles to C, another one that compiles to native x86 code, and an implementation in Java? And Lisp-likes are already being created with shared-nothing message passing, such as an on-going (as of Jul 2008) LispNYC Summer of Code project.


Well, the world needs yet another dead open source project.


Okay, it's mostly because I'm curious about how to implement an efficient language system from scratch, and I'm curious about the special problems that, say, a JIT compiler faces when another thread might be compiling the same code. Also, I happen to like Arc, but I have some issues with its axioms, and I'm also using SNAP as a sort of testbench to test what I feel are better axioms for Arc.


Goals



  1. Massively multiprocess. If Erlang can launch hundreds of thousands of processes, SNAP should too!

  2. Support OS threads, so we can take advantage of multiple cores if the OS can do so. By the same token SNAP should also support not using OS threads, meaning that operations that would block a single process should still allow other processes to continue.

  3. Efficiency, because there's little point in multiprocessing if you're just being inefficient. This includes interpreter speed, as well as efficiency in garbage collection and message passing.

  4. Make a good standard for Arc, including ways of introspecting into closured functions (which no Lisp-like has ever done), as well as introspecting function code, to allow serialization of functions from the Arc side.