Retro computing programming language stuff 

A fun project I've given some thought to is a small JavaScript-like REPL for the Atari 800, something you'd interact with the way you did old BASIC or LOGO.

The 6502 CPU and its surrounding machine are very limited to modern eyes :) but I think it's possible to make a workable toy implementation.

RAM constraints are key, as is the lack of { curly braces } in the ATASCII character set. ;)

Retro computing programming language stuff 

Still just collecting notes on this one. I expect runtime performance to suffer in favor of RAM usage when it comes to data structures.

Retro computing programming language stuff 

classic Atari LOGO implements garbage collected list, number, and symbol/string types as well as bytecode functions, which is not too far off from the base needs of something on the rough level of ES3.

IIRC LOGO implemented lists in the classic LISP way (linked lists); adding mutable arrays & hashmaps seems doable.

it's a lot of memory indirection, which is *slow* on the 6502, but what you gonna do :D

Retro computing programming language stuff 

the reference manual for atari logo describes the memory layout, even!

archive.org/details/AtariLOGOR

complex memory objects like "floating point numbers" or "symbols/strings" are split up over linked lists of fixed-size nodes :D

bad thing: need for lots of pointers wastes total available memory

good thing: no worries about fragmentation after GC

Retro computing programming language stuff 

fixed-size nodes are great for LISP-y linked lists, but the lack of constant-time indexing for strings and arrays is bad for a JS-like language.

i'm considering variable-sized allocations with a simple compacting GC, which should be straightforward to implement.

making the GC smarter could perform better but at greater code size. maybe a simple separation between fixed-size (number, object struct) and variable-size allocations (string, hashmap, array)?

Retro computing programming language stuff 

I looked into a compacting GC for a variable-sized heap, and while it sounds doable it feels a) really fiddly and b) like it's gonna bloat my code size, and I don't have a lotta bytes to spare ;)

Instead of spending bytes on large allocations to expand property lists and arrays into, then compacting stuff, maybe just stay simple and use fixed-size nodes and binary trees...

Retro computing programming language stuff 

Preserving original property insertion order for enumeration *and* having a hash-based semi-random-access lookup feels ugly though.

Either I have to give up the ordering property, or revise my data structure ideally without doubling its size. ;)

Retro computing programming language stuff 

In the node-based model, a 2-property object {x:160, y:120} takes 35 bytes, plus the storage for the actual strings and numbers.

Roughly 5 * (2 + log2(n) + 2n) bytes for n properties.

Hash collisions cost extra nodes.

obj - props - proptree - propdesc(1) - string "x"
        \          \            \ number 160
         \          \ propdesc(2) - string "y"
          \                 \ number 120
           \ list - propdesc(1)
               \ list - propdesc(2)
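
A rough C model of that layout (the real thing would be 6502 assembly; the struct and names here are just illustrative), counting the nodes for {x:160, y:120}:

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical model of the 5-byte heap node: one tag byte plus two
       16-bit slots, each holding a pointer or raw data (names are mine) */
    typedef struct {
        uint8_t  tag;   /* GC mark bit + 7-bit type */
        uint16_t a, b;  /* car/cdr-style references */
    } Node;             /* 5 bytes on the 6502; padded on modern hosts */

    int main(void) {
        /* {x:160, y:120} = obj + props + proptree + 2 propdescs + 2 list nodes */
        int nodes = 7;
        printf("%d bytes\n", nodes * 5);  /* prints: 35 bytes */
        return 0;
    }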

Retro computing programming language stuff 

Using variable length structures would be only 20 bytes, but when adding a third property, the hashmap & list would have to be resized and the first two props copied over. If adjacent RAM isn't available, that leaves a 15-byte hole in memory which might have to be compacted later.

Retro computing programming language stuff 

can shave off some more nodes here:
* list doesn't need a list node at the tail, can save 5 bytes
* tree isn't needed if only 1 or 2 props
* list isn't needed if only 1 prop

Retro computing programming language stuff 

im in ur lisp nodes, making javascript objects

Retro computing programming language stuff 

ooh i just got an evil idea

instead of a 5-byte node with a GC marker bit and 7-bit type tag, have 4-byte nodes; move the GC bits to a bitfield keyed off the pointer, and store a 2-bit context-specific type tag *in the pointers*

JS values could distinguish between primitive, number, string, and object/function.

Object internals would have propdesc, proplist, proptree, and propmeta ([tree,list] pair).

might work?
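
Here's roughly what that tag-in-pointer scheme could look like, sketched in C with made-up names, assuming 4-byte nodes aligned so the low two address bits are always zero:

    #include <stdint.h>

    typedef uint16_t Ref;   /* 16-bit Atari address with a 2-bit tag in the low bits */

    enum { TAG_PRIMITIVE, TAG_NUMBER, TAG_STRING, TAG_OBJECT };  /* JS value refs */

    static Ref      make_ref(uint16_t addr, uint8_t tag) { return (Ref)(addr | tag); }
    static uint16_t ref_addr(Ref r) { return (uint16_t)(r & 0xFFFCu); }  /* strip tag bits */
    static uint8_t  ref_tag(Ref r)  { return (uint8_t)(r & 0x03u); }

    /* GC mark bits live in a separate bitfield keyed off the node's address
       (here assuming a 16 KiB heap of 4-byte nodes starting at address 0) */
    static uint8_t gc_marks[4096 / 8];

    static void gc_mark(Ref r) {
        uint16_t node = ref_addr(r) >> 2;            /* node index within the heap */
        gc_marks[node >> 3] |= (uint8_t)(1u << (node & 7));
    }

    int main(void) {
        Ref r = make_ref(0x0ABC, TAG_OBJECT);        /* 0x0ABC is 4-byte aligned */
        gc_mark(r);
        return ref_tag(r) == TAG_OBJECT ? 0 : 1;
    }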

Retro computing programming language stuff 

IMO this is too clever for its own good. ;) Still gonna explore it a bit but leaning towards a 5-byte node.

I can also simplify the GC by using two more bits of the tag byte to indicate whether bytes 2/3 & 4/5 are pointers, so it doesn't have to grok all the types.

I can then make a sort of tiered bitfield out of the rest of the type tags, so eg 'object' and 'function' can be both distinguished easily (for calls) and identified together easily (for prop access)
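
One possible tag-byte layout along those lines (the bit assignments are purely illustrative):

    /* bit 7    : GC mark
       bit 6    : slot A (bytes 2/3) is a pointer the GC should trace
       bit 5    : slot B (bytes 4/5) is a pointer
       bits 4-0 : type, arranged so related types share a prefix */
    #define TAG_GC_MARK   0x80
    #define TAG_A_IS_PTR  0x40
    #define TAG_B_IS_PTR  0x20
    #define TYPE_MASK     0x1F

    /* tiered types: anything with bit 4 set has properties (object-like) */
    #define TYPE_OBJECT   0x10
    #define TYPE_FUNCTION 0x11
    #define IS_OBJECT_LIKE(tag) (((tag) & 0x10) != 0)
    #define IS_FUNCTION(tag)    (((tag) & TYPE_MASK) == TYPE_FUNCTION)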

Retro computing programming language stuff 

so first: i can use a free list in the free blocks to allocate new nodes very fast

second: if i take a second bit for GC state to do tri-color marking, i can do the mark phase in chunks, without needing a large stack for worst-case traversals

this may be beneficial for interactive performance :D

however it leaves me with only 4 bits for the type tag, which is tight, or drop the ref tag bits and distinguish pointers some other way (or safely do imprecise gc)
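
A sketch of the free-list allocator in C (node layout, names, and heap size are assumptions, not the actual code):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t  tag;    /* low 2 bits: GC color / free */
        uint16_t a, b;   /* on a free node, 'a' links to the next free node */
    } Node;

    enum { GC_WHITE, GC_GRAY, GC_BLACK, GC_FREE };  /* gray = reached, not yet scanned */

    #define NNODES 3276            /* ~16 KiB worth of 5-byte nodes */
    #define NIL    0xFFFF

    static Node heap[NNODES];
    static uint16_t free_head = NIL;

    static void init_free_list(void) {
        for (uint16_t i = 0; i < NNODES; i++) {
            heap[i].tag = GC_FREE;
            heap[i].a = (i + 1 < NNODES) ? (uint16_t)(i + 1) : NIL;
        }
        free_head = 0;
    }

    static Node *alloc_node(void) {
        if (free_head == NIL) return NULL;   /* a real allocator would trigger GC here */
        Node *n = &heap[free_head];
        free_head = n->a;                    /* pop the free list: very fast */
        n->tag = GC_WHITE;                   /* fresh, unmarked */
        return n;
    }

    int main(void) { init_free_list(); return alloc_node() != NULL ? 0 : 1; }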

Retro computing programming language stuff 

only the sweep needs to pause the world: do a full walk of the heap (load one byte per node, check the top bits), and for each node that's newly freed, add it to the free list by saving the old free-list head into it and resetting its type/GC tag

In a pinch, the free list could be topped up in chunks rather than sweeping the whole heap at once, but each node's type tag still has to be rewritten to mark it as free space.
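
The sweep described above, rendered as C rather than the actual 6502 assembly (constants and field names are mine):

    #include <stdint.h>

    typedef struct { uint8_t tag; uint16_t a, b; } Node;

    #define GC_COLOR_MASK 0xC0   /* top two bits of the tag byte */
    #define GC_WHITE      0x00   /* never reached during mark: garbage */
    #define GC_BLACK      0x80   /* reached: keep, and reset for the next cycle */
    #define GC_FREE       0xC0   /* already on the free list */

    #define NNODES 3276
    #define NIL    0xFFFF

    static Node heap[NNODES];
    static uint16_t free_head = NIL;

    static void sweep(void) {
        for (uint16_t i = 0; i < NNODES; i++) {
            uint8_t color = heap[i].tag & GC_COLOR_MASK;   /* one byte per node */
            if (color == GC_WHITE) {          /* newly freed */
                heap[i].a   = free_head;      /* save the old free-list head */
                heap[i].tag = GC_FREE;        /* reset the type/GC tag */
                free_head = i;
            } else if (color == GC_BLACK) {
                heap[i].tag &= (uint8_t)~GC_COLOR_MASK;    /* white again for next mark */
            }
        }
    }

    int main(void) { sweep(); return 0; }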

Retro computing programming language stuff 

first draft of a sweep routine comes out to 1,030 cycles per page (256 bytes, with 51 5-byte nodes), or about 66,000 cycles for a 16 KiB heap.

That should be about 27 ms on the Atari (1.79 MHz for NTSC), which is a touch under two 60 Hz video frames.

Not bad, but maybe improvable.

Retro computing programming language stuff 

untested 6502 assembly for the sweep func :D

gist.github.com/brion/95e8b78c

Retro computing programming language stuff 

whoops got division wrong lol

currently at 37ms for 16 KiB heap
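
(16 KiB = 64 pages; 64 × 1,030 ≈ 65,920 cycles; 65,920 ÷ 1,789,773 Hz ≈ 37 ms.)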

Retro computing programming language stuff 

down to about 28 ms with an unrolled loop and self-modifying code to shave a few cycles off the hot zone ;)

gist.github.com/brion/95e8b78c

Retro computing programming language stuff 

agh i left some of that in inconsistent state, i don't think the loop works quite right. :D

i'll fiddle with it later, it's late. ;)

night night all

Retro computing programming language stuff 

can do a little tidying up in there still, i definitely shouldn't stay up past midnight programming anymore ;)

after work ;)

Retro computing programming language stuff 

Thinking also about JS-style arrays in this system.

If I don't care about property ordering, I can just treat them as string-keyed properties, at the cost of number->string conversions on every access. ;)

Having a second binary tree for integer-indexed properties has two advantages:
* skip the string conversion & hashing steps on lookup
* proper enumeration of integer-indexed props

The tree would have to be periodically re-balanced as it grows.

Retro computing programming language stuff 

Downside: to behave per the JS spec, I have to detect "100" as well as 100 as being an integer index :D

However I could easily elide that compatibility bit and say arrays use real numbers ;)

Retro computing programming language stuff 

it occurs to me that i could store a marker bit in strings that says whether they look like a legit integer. could even pre-convert them.

Retro computing programming language stuff 

looked a little bit at larger nodes in the hopes of saving some bytes but it just makes the data structures uglier, and some of them waste more bytes!

sticking with 5-byte nodes means LISP-style data structures are easy to reason about, and the tag byte can help w/ eliding nodes on small structures (eg, store a single item reference directly instead of via a list node, then replace it with list nodes when adding a second item)

Retro computing programming language stuff 

also thinking of a pointer tag to store small integers without a node; if the high byte of an address is 0, it can't be on the heap so we can use an 8-bit signed value in the low byte.

this requires addition of one branch instruction to the common path (2 cycles) before dereferencing refs that can be arbitrary JS values
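
A sketch of that small-integer encoding in C (names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* if the high byte of a 16-bit "reference" is 0 it can't point into the
       heap (page 0 is the 6502 zero page), so the low byte can carry an
       8-bit signed value directly */
    typedef uint16_t Ref;

    static int    is_small_int(Ref r)     { return (r >> 8) == 0; }
    static int8_t small_int_value(Ref r)  { return (int8_t)(r & 0xFF); }
    static Ref    make_small_int(int8_t v){ return (Ref)(uint8_t)v; }

    int main(void) {
        Ref r = make_small_int(-5);
        if (is_small_int(r))               /* the one extra branch on the hot path */
            printf("%d\n", small_int_value(r));
        return 0;
    }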

Retro computing programming language stuff 

Got side-tracked to thinking some more about the binary tree model for object properties and arrays, and I think I'm starting to grok it.

I think this is my first actual binary tree implementation, despite many years of using them behind the scenes. ;)

For string keys, probably a simple 8-bit shift+xor hash.

For arrays the low byte of the index is the hash, which should produce natural flattening of the data structure by starting with the low bit.
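
Something like this, as a C sketch (the exact mixing is a stand-in, not the real function):

    #include <stdint.h>
    #include <stdio.h>

    /* simple 8-bit shift+xor string hash */
    static uint8_t hash_string(const char *s) {
        uint8_t h = 0;
        while (*s)
            h = (uint8_t)((h << 1) ^ (uint8_t)*s++);
        return h;
    }

    /* for arrays, the "hash" is just the low byte of the integer index */
    static uint8_t hash_index(uint16_t i) { return (uint8_t)(i & 0xFF); }

    int main(void) {
        printf("%u %u\n", hash_string("x"), hash_index(1000));  /* prints: 120 232 */
        return 0;
    }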

Retro computing programming language stuff 

At each layer of the tree, the low bit of the candidate hash is checked; 0 goes to the left branch, 1 goes to the right branch.

If another tree node is found, shift to the next hash bit and continue.

If a non-tree node is found, it's a match. New inserts will replace it with a tree, then continue with the next bit of the hash.

If nil is found, the branch is empty and available for new inserts.
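
That lookup, sketched in C with an assumed node layout and names (real refs would be 16-bit heap addresses):

    #include <stddef.h>
    #include <stdint.h>

    /* a TREE node's two slots are the 0-branch and 1-branch;
       anything else is a leaf */
    typedef struct Node {
        uint8_t tag;                  /* NODE_TREE, NODE_PROPDESC, ... */
        struct Node *left, *right;
    } Node;

    #define NODE_TREE 1

    static Node *tree_lookup(Node *n, uint8_t hash) {
        while (n != NULL && n->tag == NODE_TREE) {
            n = (hash & 1) ? n->right : n->left;  /* low bit picks the branch */
            hash >>= 1;                           /* next bit for the next level */
        }
        return n;   /* leaf = candidate match (keys still compared); NULL = empty slot */
    }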

Retro computing programming language stuff 

The binary tree should cost n-1 additional nodes for an ideal hash and less than 256 props. To provide preserved-order enumeration of properties, a list can also be kept, at the cost of another n + 2 nodes.

Very small objects (0, 1, or 2 props) can elide a couple of nodes by storing a bare property descriptor or a bare list instead of a list header and a tree.

Retro computing programming language stuff 

For denser array storage, the extra nodes for property descriptors aren't needed as long as the hash for the random access tree has enough bits to prevent collisions (eg, go through all 16 bits of an integer key if needed, starting from the LSB)

This makes an array about 1/3 smaller already. Could also cut the binary map size by a bit or two at the cost of having to do a couple extra list traversals.

Retro computing programming language stuff 

The random access binary tree is just as big as the ordered list (yay LISP-style data structures!) which is irksome, but dropping a bit or two can cut those by 50% or 75% at the cost of more list traversals, which I'll probably do because 16 KiB may be all I have to work with. ;)

Retro computing programming language stuff 

oh wait those lists are also big. haha silly me

Retro computing programming language stuff 

anyway for arrays i think it's totally worth shaving two bits off the index lookup tables; it doesn't add any new list nodes cause i've already got a full ordered list.

that'll take it from circa 2n + 2 nodes to 1.25n + 2, with average of 1.5 list traversals after the binary search, a mere 6,260 bytes for a 1,000-entry array.

plus the storage of the entries, of course. ;)
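
(1.25 × 1,000 + 2 = 1,252 nodes; 1,252 × 5 bytes = 6,260 bytes.)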

Retro computing programming language stuff 

ah wait that won't work right will it? i'm starting at the low bit, not the high bit. dammit. ok maybe i need to just start from the high bit and periodically re-balance lists like a peasant ;_;

Retro computing programming language stuff 

ok that adds 2 more nodes to store the current & "hash" size. when current size reaches the max for this hash size, shift over one bit and add a layer to the binary tree

use indexes as hashes starting from the top bit (of the current size). this means trees will start left-heavy, then fill in on the right as number of items approaches the next power of 2, then jump back left-heavy.

last 1 or 2 bits of the hash can be implied by traversing the ordered list nodes.

Retro computing programming language stuff 

oh wait -- do I even *need* the list nodes?

if i'm using the bits from the MSB on down (with the unused bits elided), then I can iterate the tree in order trivially :D

this gets arrays down to n + 2 nodes (plus storage for the referenced objects, and couple nodes for the JS Array object)
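
A sketch of that in-order iteration (C pointers for readability; real refs would be 16-bit heap addresses, and 6502 code would presumably use a small explicit stack, at most 16 deep for 16-bit indexes, rather than recursion):

    #include <stddef.h>
    #include <stdint.h>

    /* with MSB-first index bits (0 = left, 1 = right), the tree is ordered
       left-to-right by index, so a plain in-order walk enumerates the array */
    typedef struct Node {
        uint8_t tag;                  /* NODE_TREE or a value type */
        struct Node *left, *right;
    } Node;

    #define NODE_TREE 1

    static void iterate(const Node *n, void (*visit)(const Node *element)) {
        if (n == NULL) return;                           /* empty branch (sparse array) */
        if (n->tag != NODE_TREE) { visit(n); return; }   /* leaf: an array element */
        iterate(n->left, visit);                         /* smaller indexes first */
        iterate(n->right, visit);
    }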

Retro computing programming language stuff 

Also I can optimize the GC passes by putting a marker byte in the one free byte per page (51 5-byte nodes in 256 bytes). When flipping a node's GC state, set the bit corresponding to which of 4 states it is (3 GC colors or free).

Then the time-sensitive mark/sweep passes can skip pages with no nodes to process.
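
A sketch of how that per-page marker byte could be used (bit meanings are my own guesses):

    #include <stdint.h>

    /* each 256-byte page holds 51 5-byte nodes plus one spare byte, used as a
       summary of which GC states occur anywhere on that page */
    #define PAGE_HAS_WHITE 0x01   /* something to sweep */
    #define PAGE_HAS_GRAY  0x02   /* something left to mark */
    #define PAGE_HAS_BLACK 0x04
    #define PAGE_HAS_FREE  0x08

    /* when flipping a node's GC state, also set the page's summary bit
       (the marker would be cleared at the start of each pass) */
    static void note_state(uint8_t *page_marker, uint8_t state_bit) {
        *page_marker |= state_bit;
    }

    /* mark/sweep passes can then skip pages with no relevant bit set */
    static int page_needs_sweep(uint8_t marker) {
        return (marker & PAGE_HAS_WHITE) != 0;
    }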

Retro computing programming language stuff 

taking a look through the atari memory map, a 48 KiB machine should allow for 8 KiB of code (either cartridge ROM or loaded from disk) and at least 24 KiB of data pages, to be divided between stack (fixed pointers) and heap (LISP-style nodes).

The stack serves as the garbage collection roots and storage for locals & return records.


Retro computing programming language stuff 

Current sketch for indexable arrays:

gist.github.com/brion/8cc97554

The ordered binary tree is about as compact as a linked list, but allows random access at O(log2 n). Iteration through the list can be faster than doing separate indexes as well.

Sparse arrays could work too as long as the tree is maintained correctly.


Retro computing programming language stuff 

Still going back and forth on how to break down the tag byte. Tri-color marking (which allows incremental GC) needs 2 bits. I need about 3 bits' worth to cover the base JS types. And if I want dense byte storage, I either need bits to describe that or a clear ordering I can derive it from.

(eg, storing a 16-bit int or two 8-bit ints in place of a reference; the garbage collector needs to know which words are pointers!)

Retro computing programming language stuff 

An alternative is to use tagged pointers to store raw integers (8-bit is easy, more means bit fiddling) but this means less data density in nodes; eg a 6-byte float might require a 5-node linked list when storing raw bytes would allow packing it into 2 nodes.

Could do a lookup table by type (max of 64 available tag types) or cleverly arrange the types such that a bit check or lt/gt comparison can break it down quickly.
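
The lookup-table option, sketched in C (type names and table contents are illustrative only):

    #include <stdint.h>

    /* for each of up to 64 types, record which of the node's two 16-bit slots
       the garbage collector should trace as pointers */
    enum { T_FREE, T_CONS, T_FLOAT_CHUNK, T_SMALL_STRING, T_PROPDESC, T_MAX };

    #define SLOT_A_PTR 0x01
    #define SLOT_B_PTR 0x02

    static const uint8_t trace_map[T_MAX] = {
        [T_FREE]         = 0,                        /* nothing to trace */
        [T_CONS]         = SLOT_A_PTR | SLOT_B_PTR,
        [T_FLOAT_CHUNK]  = SLOT_B_PTR,               /* slot A holds raw float bytes */
        [T_SMALL_STRING] = SLOT_B_PTR,               /* slot A holds two characters */
        [T_PROPDESC]     = SLOT_A_PTR | SLOT_B_PTR,
    };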
