Friday, October 31, 2008


SNAP Dropped!


As of now, I am dropping support for SNAP, as well as the tangentially related project Arc-F.


However, not all is lost; the ideas behind SNAP and Arc-F will make it into a new Lisp-like language, "hl" (blog). I will retain this blog for reference and archiving.


Basically, hl will be a merging of the runtime VM of SNAP with the slightly different design of Arc-F (including multimethods and packages, as well as revamped sequence/scanner handling and the base functions).


Stefano Dissegna will be helping me with hl, but I would like to also invite others who are curious to contribute.

Saturday, August 30, 2008

Interlude: the LIFO heap

(i feel obligated to also give an update on the i/o subsystem first; i currently have a small basic core for the central i/o process, and am now trying to grok the c++ interface of libev. i implemented this bit of optimization to the machine some time ago)


As I mentioned in the follow-up to the post regarding continuation closures, one thing pointed out in the linked Parrot article, "What C's Memory Management Gets Rightish", is that a stack-like allocator for call frames helps tremendously in easing the pressure on the garbage collector, since usually (barring call/cc) call-frame lifetimes follow a stack discipline.


So, since at least one stated goal of SNAP is to have a reasonably efficient system, I decided to implement this optimization.


(it is of course a valid concern that i'm doing an optimization when i have no real performance data for this particular virtual machine, and in particular since the machine doesn't even have i/o yet, but this is from a post from someone who appears to be a developer for parrot, which is quite well developed; their opinion thus counts quite a bit.)


A LIFO Heap


...is of course a stack. Like any stack it keeps track of the current "position" using a stack pointer, and like any stack, items can be pushed onto it and popped off. Of course, the items here are variable-length items (since some closures are inevitably larger than others), but that is a trivial detail.


Note however that our stack structure is special: it has to be garbage-collectible together with the rest of the heap. This is largely because it was developed after the main garbage collection algorithms, and it's easier to just let it be garbage-collectible so that we don't have to add special cases.


The Arc process type inherits from the Heap class. All Arc objects normally belong to a single Heap, i.e. belong to a single process.


Each Heap object handles one "main" memory area, represented by a Semispace class. A Heap object might actually have several Semispaces; this is because, when a process sends a message, it copies the data structure into a new Semispace and then sends the entire Semispace to the receiving process.


When data is allocated on a Heap, it actually just allocates from its main Semispace, if there is still space available on it. The Semispace allocation is very simple: it simply increments a pointer and returns the previous value.
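
The pointer-bump allocation just described can be sketched in a few lines of C++; this is an illustrative mock-up (the class shape, names, and the nullptr-on-full convention are my assumptions, not SNAP's actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of Semispace's pointer-bump allocation. The real class tracks
// more state and is swept by the copying collector.
class Semispace {
    std::vector<char> mem;
    std::size_t pos = 0;
public:
    explicit Semispace(std::size_t sz) : mem(sz) {}
    // "Increment a pointer and return the previous value."
    void* alloc(std::size_t sz) {
        if (pos + sz > mem.size()) return nullptr; // full: Heap triggers GC
        void* p = mem.data() + pos;
        pos += sz;
        return p;
    }
    std::size_t used() const { return pos; }
};
```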


When the Heap object decides to perform a garbage collection, it sums up the size of the main Semispace and all received Semispaces, then creates a new Semispace to hold the data. It then copies the live data into the new Semispace, which becomes the main Semispace afterwards. The Semispace may be resized if it turns out to be much, much larger than the actual live data.


Normally, memory allocation is done via the Heap objects:


Heap& hp = proc;
// ...
Generic* gp = new(hp) Cons();

The new operator simply ends up invoking an alloc() member function on the Heap object.
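
One common way to get this syntax is a placement form of operator new that forwards to the heap's allocator; a hedged sketch (the real SNAP Heap and Generic classes are more involved, and alignment is ignored here):

```cpp
#include <cassert>
#include <cstddef>

// Toy Heap: enough to show the routing, not SNAP's actual class.
struct Heap {
    char buf[256];
    std::size_t pos = 0;
    void* alloc(std::size_t sz) {
        void* p = buf + pos;   // sketch: ignores alignment and overflow
        pos += sz;
        return p;
    }
};

// new(hp) T() ends up calling hp.alloc(sizeof(T)).
void* operator new(std::size_t sz, Heap& hp) { return hp.alloc(sz); }
// Matching delete, invoked if the constructor throws.
void operator delete(void*, Heap&) noexcept {}

struct Cons { void* car = nullptr; void* cdr = nullptr; };
```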


Although a valid syntax using Semispace references also exists, it is generally used only by the Heap objects.


A LIFO Semispace


For this optimization, however, we needed an additional kind of Semispace, called the LifoSemispace. (it could actually have been the same type as Semispace, but that class was designed before i implemented this optimization, so some bits of it were inappropriate) The LifoSemispace has no inheritance relationship to the Semispace type at all, although it does have a similar set of methods.


The Semispace object supports deallocating only the most recently allocated memory area (this is necessary in case of a thrown exception in the constructor). However, the LifoSemispace allows deallocation, in reverse order, of all objects allocated on it.
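
A minimal sketch of such reverse-order deallocation, using a small per-block header that links back to the previous block (illustrative; the real LifoSemispace differs in detail, and popping an empty space is left undefined here):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch of a LIFO semispace that can pop *all* allocations in reverse
// order. Each block carries a header holding the offset of the previous
// block's header.
class LifoSemispace {
    char buf[4096];
    std::size_t pos = 0;   // top of the allocation area
    std::size_t top = 0;   // offset of the most recent block's header
public:
    void* alloc(std::size_t sz) {
        std::size_t hdr = pos;
        std::memcpy(buf + hdr, &top, sizeof top);  // link to previous header
        top = hdr;
        pos = hdr + sizeof top + sz;
        return buf + hdr + sizeof top;             // payload after header
    }
    // Pop the most recent block; repeated calls unwind every allocation
    // in reverse order.
    void pop() {
        std::size_t prev;
        std::memcpy(&prev, buf + top, sizeof prev);
        pos = top;
        top = prev;
    }
    std::size_t used() const { return pos; }
};
```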


And a LIFO aspect of the Heap


A LifoSemispace, of course, does not just float around: it's handled by a Heap object. We don't directly allocate on a LifoSemispace, in much the same way that we don't normally directly allocate on a Semispace.


This does have the minor problem that the nice new(hp) syntax is already taken.


To handle this, we introduce a new class, the LifoHeap. This class is composed of a single pointer to a Heap, and is constructed from a pointer to Heap. It is thus as lightweight as a reference to a Heap.


Heap now also provides a lifo() member function, which returns a LifoHeap from the this pointer. In order to allocate from a Heap's LIFO allocation area, we simply use new(hp.lifo()):


Heap& hp = proc;
// ...
Generic* gp = new(hp.lifo()) ClosureArray();

First.... Out!


The LifoHeap object also provides a normal_dealloc(...) member function. This member function attempts to deallocate the specified Generic object, provided that it 1) is allocated on the LifoHeap and 2) is the most recently allocated object on the LifoHeap.


This function will silently fail if the two conditions above are not met. Since objects on LIFO allocation are still subject to garbage collection, a failed deallocation is still OK: the memory space will simply be reclaimed automatically.
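
A sketch of this guarded, silently-failing deallocation (the names and the explicit size parameter are my assumptions, not SNAP's interface):

```cpp
#include <cassert>
#include <cstddef>

// Toy LIFO allocation area with a normal_dealloc() that succeeds only
// when the object is the most recent allocation; otherwise it silently
// fails and the GC reclaims the memory later.
class LifoHeap {
    char buf[1024];
    std::size_t pos = 0;
    char* last = nullptr;   // most recently allocated object
public:
    void* alloc(std::size_t sz) {
        last = buf + pos;   // sketch: ignores alignment and overflow
        pos += sz;
        return last;
    }
    void normal_dealloc(void* p, std::size_t sz) {
        char* q = static_cast<char*>(p);
        bool here = q >= buf && q < buf + sizeof buf;  // condition 1
        if (!here || q != last) return;                // silent failure
        pos -= sz;
        last = nullptr;  // this sketch tracks only one level of undo
    }
    std::size_t used() const { return pos; }
};
```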

Wednesday, August 27, 2008

The backlog

This is just a list of stuff I'd like to work on, as soon as I finish the I/O subsystem:



  1. Proper polymorphic functions or methods - i.e. functions that fix the curse of chains


  2. Symbol-based packages, as suggested by cchooper. Probably I'll go with what I proposed.


  3. Implement some more higher-order function bytecodes, i.e. flesh out the reducto family of bytecodes.


  4. Improve arc2c, particularly quasiquotes, destructuring function variables, and macros


  5. Add tables to the runtime, since tables are used to represent data structures in the arc2c compiler.


Tuesday, August 19, 2008

Update: I/O

It's been some time since my last post on this blog, so I feel obligated to report a bit on what I've been doing.


We call it Input-Output


Currently I'm working on the I/O subsystem. I'm trying to concentrate on this now instead of adding various features to SNAP (and thinking up various axioms to help make Arc a better language for creating building blocks), and I'm forming a little backlog list while I'm building I/O.


One problem we have here is concurrent access to an I/O port. Of course, concurrent access to a port doesn't quite make sense: if you want to keep track of which of several processes should be accessing the port right now, you'd have to use some sort of serializing system (i.e. message passing). In general, having just one process keep access to the port and handle the synchronization will be simpler and probably easier to maintain.


However, the point is that in SNAP we will allow you to do this while making sure that the virtual machine doesn't crash as a whole, and that your process doesn't crash others just because they are effectively sharing the resource.


The other problem is that we'll be using green threads in the execution subsystem. This means that context switching is done in an explicit manner, and in theory, it should be possible to "run" multiple processes even without using OS threads. This means that we have to use nonblocking and/or asynchronous I/O.


No time to wait


Typically, I/O operations will wait for the I/O to complete. In the case of input from a user terminal or from a network socket, this means that if data is not available, we must wait.


However, waiting is not acceptable: we might have some other process that could be running and isn't going to use that port. This means that we should be able to determine if an I/O port has data available, or can accept data, and only talk to the I/O port if so; we need to use asynchronous I/O.
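
On POSIX systems the readiness check can be illustrated with poll() and a zero timeout; this is a sketch of the idea only, not SNAP's actual backend code:

```cpp
#include <cassert>
#include <poll.h>
#include <unistd.h>

// Ask whether a descriptor is readable *right now*, without blocking.
// A zero timeout makes poll() return immediately.
bool ready_to_read(int fd) {
    pollfd p{};
    p.fd = fd;
    p.events = POLLIN;
    return poll(&p, 1, 0) == 1 && (p.revents & POLLIN);
}
```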


Surprisingly, Microsoft Windows seems to be better at asynchronous I/O than Unix-based OSes. POSIX defines an asynchronous I/O interface, but it doesn't appear to be well supported among otherwise POSIX-compliant operating systems, and we have a hodgepodge of interfaces, such as the Linux-only epoll and the BSD-only kqueue. Some of these interfaces are not even well supported and/or particularly stable; the only thing that appears reliable is the most basic select(), which has efficiency problems. (and of course, efficiency is never a concern, unless it is)


So, the I/O system backend has to be easily swappable with other back-ends. I'm currently implementing around libev, which was inspired by libevent. libev is newer (and consequently probably less battle-tested) but faster; however, it is limited to the hodgepodge of interfaces supported by Unix-likes, while libevent is older and more mature, and includes a Windows backend.


The I/O system backend, however, is presented to the rest of the SNAP VM world by the Central I/O Process.


The Central I/O Process


The Central I/O Process handles all the I/O done in the system, and feeds it into the backend. This allows the backend to be lock-free: it can only be run from one OS-level thread, specifically whichever drew the short stick and got the central I/O process. (libev and libevent supposedly properly support multiple threads, as long as you use the "reentrant" interface functions, but I'd rather use the default interface)


The Central I/O Process, like any good process, can also accept and send messages. It accepts a set of "request" messages, each of which includes a tag, the source process, and the port data object; when the backend has completed the task, it sends a "response" message - either an "ok" message or an error - back to the requesting process.
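
As a purely hypothetical illustration of the message shapes (SNAP's real messages are Arc data structures, and all field names here are my own):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Opaque port data: only the backend knows the layout.
struct PortData;

struct IORequest {
    int tag;          // echoed in the response so replies can be matched
    int source_pid;   // the process to send the response to
    PortData* port;   // passed through to the backend untouched
};

struct IOResponse {
    int tag;            // same tag as the originating request
    bool ok;            // "ok" or error
    std::string error;  // description when !ok
};

// What the central I/O process does once the backend finishes a task.
IOResponse respond(const IORequest& req, bool ok, std::string err = "") {
    return IOResponse{req.tag, ok, std::move(err)};
}
```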


Crucially, the Central I/O Process keeps its hands off the port data object. The exact format of the port data object is not known by the Central I/O Process; the port data objects are created and used only by the backend. Thus, the port data objects are effectively opaque to the rest of the SNAP virtual machine.


The Arc I/O Ports


But having a Central I/O Process is not sufficient. The problem is that part of the backend's assumptions include the fact that at any one time, for a particular port, only one asynchronous I/O event is on-going. This means that access to the I/O ports must be synchronized. In SNAP and similar message-passing concurrency environments, synchronization is handled by isolating the synchronized resource into a separate process.


Thus, the I/O ports on the Arc side are not even the opaque port data objects; they are wrappers around a process ID for a process which handles the synchronization of the actual port data objects.


Yes, asynchronous I/O is hard. ^^

Sunday, August 3, 2008

The base functions

In a few of my recent posts on arclanguage.com, as well as the reducto discussion on this blog, I have been showing some functions that begin with the prefix <base>. However, I have been thinking of the <base> functions for some time already; their primary motivation is to make it easier to define new object types.


What started me was when I was experimenting with defining new object types in Arc, an example of which is my create your own collection series. Now, one bit I thought would be nice for collections is to allow them to be composed, and yet still be assignable:


(= foo (file-table "/tmp/"))
(= foos foo:string)
(= foos.2 "2")

Basically, the assignment to foos above would be equivalent to (= (foo (string 2)) "2")


In order to allow this, I needed to redefine compose. Unfortunately, I needed to redefine the complete compose.


This is when I started thinking about the <base> functions.


Dividing the Concept


Conceptually, compose is simply a reduction of a more basic "function-composition" operation on its arguments. We can thus divide compose into a reducer operation, implemented using the reducto bytecode, and a more basic operation, which we would prepend with the prefix <base>.


Then anyone who wishes to override compose doesn't have to reimplement the entire function, just the part he or she is interested in: the basic operation. Instead of handling the case of one argument, zero arguments, or N arguments, only the simple case - the two-argument case - needs to be handled.
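
Transliterated into C++ as a sketch (SNAP's version would be Arc plus the reducto bytecode; all names here are illustrative), the division looks like a binary base operation plus a generic reducer:

```cpp
#include <cassert>
#include <functional>

using Fn = std::function<int(int)>;

// <base>compose: the single piece an overrider would need to supply.
Fn base_compose(Fn f, Fn g) {
    return [f, g](int x) { return f(g(x)); };
}

// The generic reducer: handles 1..N functions purely in terms of
// base_compose, mirroring how the variadic compose folds its arguments.
template <class... Rest>
Fn compose(Fn f, Rest... rest) {
    if constexpr (sizeof...(rest) == 0)
        return f;
    else
        return base_compose(f, compose(rest...));
}
```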


compose is not the only function that would benefit from this separation; the mathematical and comparison functions would also benefit. This simplifies the effort needed to implement various numeric systems, such as quaternions.


Adding Lists


Not all is well in a <base>-ic world, though. The problem is + on lists, which performs a copying concatenation of the lists.


If we define (+ a b c) as being, effectively, equivalent to (<base>+ (<base>+ a b) c), then the inner <base>+ will create a copy of a and b and return it. The outer <base>+ will then recopy the returned list, even though reusing it would have been better; we thus end up with more allocation than we wanted.


The problem is also repeated, in a less memory-pressing manner, when working with SNAP integers. SNAP integers are boxed integers, i.e. they are part of the object hierarchy and will occupy real memory. + and other mathematical operations will have to create copies of the integer.


Although there is a solution (which I hope to fully present in a future post), it involves playing some significant tricks on the type system, and will probably also need to use significant portions of a proposed multimethod dispatching scheme I've thought of.


In brief, it means defining a "temporary" type which encloses the actual type of the object. We can then define overloads of the <base> functions which will reuse the objects in the encapsulating temporary type.


For now, however, we may have to first accept that this axiom will add overhead, but for that overhead it gains greater flexibility and ease-of-use.

Wednesday, July 30, 2008

symeval

A common problem in Arc is that macros - even well-designed ones - can introduce subtle bugs. I thus propose the addition of a new axiom, a special form called symeval.


For example, consider a library which has a complicated global function:


(def cplx-fun (sym trap val)
  (some-complex-expression sym trap val))

Now the library designer intends to use this as a somewhat-internal function; what he or she expects people to use is a macro that wraps the call:


(mac cplx (val . body)
  (w/uniq sym
    `(cplx-fun ',sym (fn () ,@body) ,val)))

And of course, all is well and good... the library was so useful and powerful that people no longer felt the need to actually look inside the library and bother with how it was implemented.


Until the day, of course, where some random newbie decided to do this:


(def my-fun (x)
  (let cplx-fun (fn (tmp) (cplx 42 tmp))
    (cplx answer
          (cplx-fun x))))

Unfortunately, the global 'cplx-fun was shadowed by the local version. So now the newbie had to bother with how the library worked, when it was already so good and perfect that nobody should have had to.


symeval


I thus propose the addition of a new axiom, 'symeval. symeval is a special form, like fn or if, and thus cannot be overridden at all.


symeval will evaluate its argument, and check if the result is a symbol. If it's a symbol, it evaluates the symbol in the global environment.


As an optimization, the implementation can treat something of the form (symeval 'foo) as being equivalent to an ordinary global variable read; the important thing is that (symeval 'foo) will always read the global variable regardless of the existence of any foo variable in the same context.


This should be trivial to add in arc2c and hence in SNAP: we need only to replace symeval forms (probably in xe) with global variable references if the form is quoted, and transform it into a primitive otherwise.


Thus the macro is now:


(mac cplx (val . body)
  (w/uniq sym
    `(symeval!cplx-fun ',sym (fn () ,@body) ,val)))

Tuesday, July 29, 2008

Continuation Closures

Closures in SNAP, much like closures in arc2c, are simply flat arrays of references to Arc objects. Closures in SNAP are also used to represent functions on the Arc side; that is, what Arc thinks is a function, is actually an object of the class Closure. The Closure object also contains a reference to an Executor, which is a scary name for something that executes. (more about Executor in another post)


Closures in SNAP are expected to be immutable: once they've been constructed, their contained values are not supposed to change (there isn't anything that stops you from doing that on the C++ side, other than a nasty comment). On the other hand, continuation closures have a reusable() member function which specifies whether or not the continuation closure is reusable, i.e. whether or not its contents can be modified with new content. Continuation closures are represented by the KClosure class, which is derived from class Closure.


Why KClosure?


Having a separate KClosure class just for continuations is largely an optimization to avoid allocating excessive amounts of closures. A continuation closure is usually just invoked once, at the end of a normal function. Typically, after the continuation itself ends - when it must invoke another function - its closure can be freed.


Of course, this is a garbage-collected system, and a copying one at that. "Freeing" memory involves not copying things, and "not copying" is what you want to do most of the time. The best you can do is to reuse a piece of memory you've already allocated.


So a continuation can freely reuse its closure, because it won't get invoked again (barring a minor case i'll get to later) and we don't expect there to be any live data that still refers to the continuation.


And since the new closure we need is also a continuation closure, we reuse the memory area by constructing any new continuation closure in place of the current one. So a continuation which finds that it must invoke another, non-continuation function - which would expect a continuation - can reuse its own continuation closure; this lets us skip allocation in a large number of cases.


Continuation closures have an invariant: a reference to a continuation closure can only exist on the stack, or as an entry in another continuation closure. Also, continuation closures cannot form anything other than a straight singly-linked list.


So why have a reusable() member function? The problem is a little feature called 'ccc.


'ccc (known in the Scheme world as call-with-current-continuation or call/cc) captures the continuation and returns it all trussed up. This also means that the continuation might be invoked using the captured continuation closure more than once - and our assumption so far has been that the continuation is invoked only once.


This is solved by having 'ccc call the banreuse() member function on the captured continuation closure, which causes reusable() to always return false afterward. This means that the continuation closure cannot be reused, and the VM will actually copy the continuation closure when reuse is requested (a sort of "copy-on-write").
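
The scheme boils down to a single bit plus a copy-on-write check; a hedged sketch (the real KClosure also holds the captured values and an Executor):

```cpp
#include <cassert>

// One-bit "more than one reference" scheme for continuation closures.
// The member names follow the post (reusable/banreuse).
struct KClosure {
    int slots[4] = {0};   // stand-in for the captured values
    bool can_reuse = true;
    bool reusable() const { return can_reuse; }
    void banreuse() { can_reuse = false; }  // called by 'ccc on capture
};

// When a continuation wants to build its next continuation closure:
// reuse in place if allowed, otherwise copy (copy-on-write).
KClosure* reuse_or_copy(KClosure* k) {
    if (k->reusable()) return k;   // overwrite the same memory
    return new KClosure(*k);       // captured by 'ccc: must copy
}
```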


Continuation closures that are not reusable may safely violate the invariant. Also, such continuation closures may now form any directed graph, and can even be a cyclic graph.


(this optimization was inspired by the parrot article refcounting isn't all bad and its followup what c's memory management gets rightish; instead of a full refcount, i use a single bit which means "more than one reference" - basically, that's the bit toggled by banreuse() and queried by reusable() - and like in the followup i use this only for continuation closures. i suggested this in a post on the Arc forum)




Follow-up: can it go even faster?


Due to CPS conversion, code generated by the arc2c compiler (around which the SNAP virtual machine is built) tends to allocate a lot of closure objects, especially continuation closures.


As mentioned in the "what c's memory management gets rightish" link above, making use of a stack-like allocator - where continuation closure frames can be allocated and deallocated in last-in-first-out order - would severely decrease garbage collection pressure. In a CPS implementation like SNAP, this means that, barring 'ccc, a continuation that has exited can have its closure deallocated almost immediately.


This could in fact be done in SNAP; any continuation that exits without reusing its closure can request deallocation of the closure; the deallocator deallocates the continuation closure if it's the last allocated structure, or leaves it on the stack if not.


This will of course complicate GC somewhat. It also means that each heap has two separate allocation areas: one for normal allocation, and one for LIFO structures. Since the call graph is usually structured as a singly-linked list, this will allow even better reuse of closures; in fact, the current strategy of explicitly reusing closures can probably be replaced with the LIFO structure.




Follow-up: LIFO implemented


LIFO allocation-deallocation schemes have been implemented.

Monday, July 28, 2008

reducto: the most complex single bytecode in SNAP

Stefano recently brought up a potential efficiency issue in implementations of Arc. Specifically, the problem is that some variadic functions, such as '+, are really the reduction of a more basic function on its arguments, i.e.


(def + rest
  (reduce <base>+ rest))

The problem is that variadic functions normally put their arguments in a list, and in the most common case, '+ and related functions (such as '-, '*, and '/) will only be called on two arguments. Consing here is just a waste of memory and makes the dreaded GC loom nearer.


The solution which stefano proposed is to inline the code, so that potentially (+ a b c d) will become (<base>+ (<base>+ (<base>+ a b) c) d). However, inlining is a bit difficult in a dynamic language with 'eval; you need some way to "uninline" the code if the global functions are redefined.


(it also assumes that '+ associates left-to-right, but it's quite valid for the programmer to modify '+ to work right-to-left)


My alternative, since I am defining my own virtual machine and bytecode interpreter, is simply to make a really, really complex bytecode which handles this problem and avoids allocating in the most common cases.


reducto


reducto effectively implements the reduction function and avoids allocation in the common cases where only two parameters are given. However, it is not completely standalone; it also needs to know the function(s) to apply. It thus expects the function it is part of to contain three entries in its closure, which are three functions or applicable objects.


The reduction is only useful if two or more arguments are given; special handling must be specified for the case where 0 or 1 arguments are given. For example, '- negates its argument if only 1 argument is given. This is why the closure has 3 function entries - (1) a function that handles the 0-argument case, (2) a function that handles the 1-argument case, and (3) a function that is used for reduction, and handles 2-argument and higher cases.
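
The three closure entries and the selection between them can be sketched in C++, using '-' as the example (illustrative only; the real bytecode works on the VM stack and avoids the allocations this mock-up hides):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// The three closure entries reducto expects, as plain C++ functions.
struct ReductoClosure {
    std::function<int()> zero;             // 0-argument case
    std::function<int(int)> one;           // 1-argument case
    std::function<int(int, int)> reducer;  // binary case, folded left
};

int reducto(const ReductoClosure& c, const std::vector<int>& args) {
    if (args.empty()) return c.zero();
    if (args.size() == 1) return c.one(args[0]);
    int acc = c.reducer(args[0], args[1]);    // the common, cheap case
    for (std::size_t i = 2; i < args.size(); ++i)
        acc = c.reducer(acc, args[i]);        // extra args: the hard case
    return acc;
}
```

For '-', the 1-argument entry negates and the binary entry subtracts, so (- 10 3 2) folds to 5.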


Running reducto: the simple cases


reducto chooses which function to apply based on the number of arguments it finds on the stack. Two of the arguments are the current closure and the continuation closure; the rest of the stack entries are the arguments given by the Arc code (the "Arc arguments"). If 2 or fewer Arc arguments are on the stack, then it simply directly indexes the closure: the 0-argument function is in closure index 0, 1-argument is in closure index 1, etc. For such simple cases, it performs the call directly, modifying only the "current closure" entry of the stack - no allocation is done. Simple, direct, and cheap in both memory and time.


Of course, the interesting bit is for the non-trivial cases where there are 3 or more Arc arguments.


3 arguments or more


For the non-trivial cases, reducto must allocate a closure and store the extra arguments there; in order to reduce allocation, it uses a continuation closure, which can be reused. The continuation closure holds the current continuation, the function to invoke, and all the arguments except the first two, as well as an index integer.


The actual size of the created closure depends on the number of arguments. For each extra argument beyond the first two, a slot must be reserved for that argument. In addition, the continuation closure must store the final continuation, the function to invoke, and the index; in effect, the size of the closure corresponds to 1 + number of Arc arguments.


In the C++ source, the reducto_continuation executor handles the actual reduction operation; it refers to the continuation closure and calls the reduction function for each argument.


The index specifies which argument to reduce next. Now, SNAP integers are boxed; fortunately, we know that the index number will not be used by anything else, so we can simply modify the contents of the box directly, without additional allocation.


Handling nonreusability


Recall that 'ccc could make a continuation closure non-reusable. In such a case, we cannot modify the index integer.


Since the index integer is now nonmodifiable and the continuation closure is now immutable, we will have to create a fresh continuation closure and copy the contents. This allows 'ccc to operate properly.


(note that now that i've thought about it, we might actually still be able to just create a closure that contains an index number and a reference to the non-reusable continuation closure, and suffer through an additional indirection, which would be a small price to pay compared to copying; we would need to define an additional executor for such a continuation, though)

Sunday, July 27, 2008

Shared Nothing Arc Processes: the introduction

Shared Nothing Arc Processes (SNAP) is a virtual machine designed for a massively multiprocess implementation of Arc, where communication is done by shared-nothing message passing.


Why shared-nothing message passing?


As an electronics engineer doing mostly digital designs, I think I can safely say that multicore, highly parallel programming is the future. ^.^ I find Erlang interesting, although I don't really like its syntax.


Of course, there's another alternative for coordinating multiple processes, STM. And maybe it can be done for Arc. There's just one problem: I can grok shared-nothing message passing, but I can't grok STM. So, at least for now, shared-nothing message passing it is.


Why Arc?


Because it's very similar to Cadence SKILL, the extension language of Cadence's Electronic Design Automation (EDA) tools. I'm an electronics engineer specializing (somewhat) in IC design and testing, which means that I make use of Cadence products quite a bit in the office; SKILL was the first Lisp-like I seriously programmed in.


Arc and SKILL have the following similarities:



  • Lisp-1, at least for SKILL++ mode.

  • t and nil

  • List-based macros, like in Common Lisp and unlike hygienic macros in Scheme


Why bother?


Why do this, when there's already a good, mature implementation of shared-nothing message passing, Erlang? Why do this when PG has finally released Arc, and some dabblers are building Arc implementations from scratch all over, including at least one compiler that compiles to C, another one that compiles to native x86 code, and an implementation in Java? And Lisp-likes are already being created with shared-nothing message passing, such as an on-going (as of Jul 2008) LispNYC Summer of Code project.


Well, the world needs yet another dead open source project.


Okay, it's mostly because I'm curious about how to implement an efficient language system from scratch, and I'm curious about the special problems that, say, JIT faces when another thread might be compiling the same code. Also, I happen to like Arc, but I have some issues with its axioms, and I'm also using SNAP as a sort of testbench to treat what I feel are better axioms for Arc.


Goals



  1. Massively multiprocess. If Erlang can launch hundreds of thousands of processes, SNAP should too!

  2. Support OS threads, so we can take advantage of multiple cores if the OS can do so. By the same token SNAP should also support not using OS threads, meaning that operations that would block a single process should still allow other processes to continue.

  3. Efficiency, because there's little point in multiprocessing if you're just being inefficient. This includes interpreter speed, as well as efficiency in garbage collection and message passing.

  4. Make a good standard for Arc, including ways of introspecting into the closures of functions (which no Lisp-like has ever done), as well as introspecting function code, to allow serialization of functions from the Arc side.