It's no secret that I have a soft spot for the Sinclair ZX Spectrum. One of the
things I was amazed by as an eight-year-old was the (then) incredible 256×192 pixel colour graphics. Using only 6.75
kilobytes of video RAM, the custom Ferranti ULA chip pieced together the video
signal 50 (or 60) times per second.
Software emulation of the Ferranti ULA has been done many times before, but reinventing the wheel is a great way of learning
new (or old) things, so I decided to make an attempt of my own.
JavaScript, CANVAS and Proxies, oh my!
First of all, I'm using a CANVAS element and the CanvasRenderingContext2D object to draw graphics in the browser's window.
I'm also using a Uint8ClampedArray to store the 6912 bytes of raw video RAM. For a more detailed description of the memory
layout, scroll down a little. Each byte of the array corresponds exactly to one byte of ZX Spectrum RAM,
so changing the contents of a single byte should trigger a redrawing of at least a part of the canvas.
I decided to redraw the canvas in blocks of 8×8 pixels, because this is close to how the ZX Spectrum ULA worked. Changing any
one of the 8 bitmap bytes inside a block, or its attribute byte, should mark that block as "dirty" and when the next animation
frame comes along, all dirty blocks should be rerendered. Because of this, there is also a Uint8Array of length 768 (32×24)
keeping track of dirty blocks, so that I don't have to redraw all blocks every frame.
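That per-frame redraw can be sketched like this (a minimal sketch; the names renderDirtyBlocks and renderBlock are mine, not from the actual source):

```javascript
// 768 flags, one per 8×8 block; non-zero means "redraw me"
var dirtyBlocks = new Uint8Array(768);

// Called once per animation frame. renderBlock is a callback that
// redraws a single 8×8 block on the canvas.
function renderDirtyBlocks(renderBlock) {
    var redrawn = 0;
    for (var i = 0; i < 768; ++i) {
        if (dirtyBlocks[i]) {
            renderBlock(i);
            dirtyBlocks[i] = 0; // the block is clean again
            ++redrawn;
        }
    }
    return redrawn;
}
```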
Using a Proxy object, I'm able to use the array normally, while correctly marking dirty blocks as needed. Without a Proxy,
I would have to expose a setter method for changing the RAM contents.
// Without a Proxy object:
data.set(address, newValue);

// With a Proxy object:
data[address] = newValue;
The Uint8ClampedArray and Proxy construction looks like this:
var data = new Uint8ClampedArray(6912);
var dataProxy = new Proxy(data, {
    "set" : function(target, property, value, receiver) {
        if (property >= 0 && property < 6912) {
            data[property] = value;
            var dirtyBlockIndex;
            if (property >= 6144) {
                // The index is inside the attribute section
                dirtyBlockIndex = property - 6144;
            } else {
                // The index is inside the bitmap section
                dirtyBlockIndex = blockIndexFromOffset(property);
            }
            dirtyBlocks[dirtyBlockIndex] = 1;
            return true;
        }
        // Not a numeric index inside the boundaries
        return false;
    }
});
This creates a Proxy that, when written to, sets the value of the hidden array, calculates what block is changed, and marks
that block as dirty, so that the next call to the renderer only redraws the dirty blocks. This speeds up the rendering process
a lot.
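The blockIndexFromOffset helper is left out above. Based on the interleaved bitmap layout described further down, a version of it might look like this (my own sketch, not the actual implementation):

```javascript
// Map an offset in the bitmap section (0..6143) to a block index (0..767).
// Offset bits: [Y7 Y6 | Y2 Y1 Y0 | Y5 Y4 Y3 | X4 X3 X2 X1 X0]
function blockIndexFromOffset(offset) {
    var third = (offset >> 11) & 3;   // which 256×64 third of the screen (Y7 Y6)
    var charRow = (offset >> 5) & 7;  // character row within that third (Y5 Y4 Y3)
    var column = offset & 31;         // character column (X4..X0)
    return (third * 8 + charRow) * 32 + column;
}
```

Note how the pixel-row bits (Y2 Y1 Y0) are simply masked away: all eight bitmap bytes of a block map to the same block index.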
The ZX.Spectrum.Bitmap object exposes the following public functions:
poke(address, value): Changes one byte of video RAM (valid addresses are within the 16384..23295 range)
peek(address): Reads one byte of video RAM
ink(value): Sets the current INK colour (0..7)
paper(value): Sets the current PAPER colour (0..7)
bright(value): Sets the current BRIGHT value (0..1)
flash(value): Sets the current FLASH value (0..1)
cls(): Clears the screen using the current settings
plot(x, y): Sets one pixel, affecting the colour block
unplot(x, y): Clears one pixel, affecting the colour block
line(x1, y1, x2, y2): Draws a one pixel line, affecting colour blocks
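As a sketch of why plot, unplot and line "affect the colour block": every pixel shares its attribute byte with the 63 other pixels of its 8×8 block. The attribute offset of any pixel can be computed like this (my own helper, not part of the listed API):

```javascript
// Offset into the 6912-byte video RAM of the attribute byte
// governing the 8×8 block that contains pixel (x, y).
function attributeOffset(x, y) {
    return 6144 + (y >> 3) * 32 + (x >> 3); // one byte per block, 32 blocks per row
}
```

Plotting a pixel also rewrites this byte from the current INK/PAPER/BRIGHT/FLASH settings, which is how a single plot can recolour its whole block.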
The ZX Spectrum was an amazing computer for its time. An advanced BASIC interpreter fit snugly into 16 kilobytes of ROM, and
the 48 kilobytes of RAM included 6.75 kilobytes of graphics memory. Using BASIC commands like PLOT, INK and CIRCLE, you
could write algorithms to draw things of beauty on the screen, but you had to look out for attribute clash.
The video RAM consisted of monochrome bitmap data containing one bit per pixel for a total of 256×192=49152 bits, fitting into
49152/8=6144 bytes, starting at address 16384. The order of pixel rows inside this memory area is a little strange, as rows are
not placed linearly (each line of 256 pixels is not exactly 256 bits after the one above it). To calculate the screen address
of the first pixel of a Y coordinate, you encode the address as follows:
Bit:    15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
Value:   0  1  0 Y7 Y6 Y2 Y1 Y0 Y5 Y4 Y3  0  0  0  0  0
This effectively divided the screen vertically into three blocks of 256×64 pixels, within which it is easy to get to the next
line of characters, and also easy to get to the next line within a character block by simply adding one to the high byte of the
address, but calculating the screen position from a pixel coordinate is really convoluted.
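In code, the bit-scattering above boils down to three shifts and masks. This helper (mine, for illustration) returns the address of the leftmost byte of pixel row y:

```javascript
// Address of the first bitmap byte of pixel row y (0..191), encoded as
// 0 1 0 Y7 Y6 Y2 Y1 Y0 Y5 Y4 Y3 0 0 0 0 0.
function bitmapAddressOfRow(y) {
    return 0x4000             // base address 16384 (the "0 1 0" top bits)
         | ((y & 0xC0) << 5)  // Y7 Y6 into bits 12..11
         | ((y & 0x07) << 8)  // Y2 Y1 Y0 into bits 10..8
         | ((y & 0x38) << 2); // Y5 Y4 Y3 into bits 7..5
}
```

Because Y2 Y1 Y0 end up in the high byte, adding 256 to an address moves exactly one pixel row down within a character block, just as described above.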
Directly after that monochrome bitmap, at address 22528, was one attribute byte per 8×8 block, containing the colour values for
the "ones" and "zeros" of the bitmap data. Each attribute byte is encoded like this:
Bit:    7  6  5  4  3  2  1  0
Value:  F  B P2 P1 P0 I2 I1 I0
F holds a one for FLASH mode, where INK and PAPER alternate every 32 frames
B holds a one for BRIGHT mode, where both INK and PAPER are a little brighter
P0..P2 holds a value between 0 and 7 for the PAPER colour, which is used for zeroes in the bitmap
I0..I2 holds a value between 0 and 7 for the INK colour, which is used for ones in the bitmap
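Decoding an attribute byte is then just a matter of masking and shifting; a sketch (the function name is mine):

```javascript
// Split an attribute byte into its FLASH, BRIGHT, PAPER and INK parts.
function decodeAttribute(attr) {
    return {
        flash  : (attr & 0x80) !== 0, // bit 7
        bright : (attr & 0x40) !== 0, // bit 6
        paper  : (attr >> 3) & 7,     // bits 5..3
        ink    : attr & 7             // bits 2..0
    };
}
```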
Avoiding the "attribute clash" was tricky, and you had to really plan your artwork or your graphics algorithm to make sure that
you only ever needed two distinct colours inside each 8×8 block of pixels.
Artist Mark Schofield wrote an article in 2011,
describing his process of planning and creating a piece of ZX Spectrum artwork.
If you are looking for more ZX Spectrum art, here are a couple of sites you might have a look at:
Wow, it's been a really long time since I wrote anything about my demo projects here. I apologize for that, but the good news is that I have finally found some extra time to spend on "pleasure programming" again.
First, there's the unfinished business of Artsy (and slightly insane). About five months ago, I was finished with the first two parts, and I haven't had time to start on part three until now. So far, it looks pretty good. Unfortunately it doesn't yet work on iOS devices, but feel free to check it out:
Someone in the CODEF Facebook group linked to the SoundBox tool, and I find it quite amazing. It's a JavaScript-based chip tune tracker, with a really nice and small playback routine. I have been experimenting a bit with it, and here are some results:
I just made the Artsy repository public. You can check out all my commits from the first boilerplate code to the current two-part demo beta on github at /lbrtw/artsy.
In a couple of weeks I will continue adding the third part of the demo. Stay tuned! :)
Artsy (and slightly insane), first two parts now in beta
A couple of minutes ago, I uploaded a new version of Artsy (and slightly insane). It includes JavaScript remakes of the first two parts of the iconic Amiga
Demo Arte by Sanity.
The code is pure JavaScript and Canvas. There is no Flash, no Silverlight, no WebGL stuff, and there are no frameworks involved. When I'm done remaking the third and final part of Arte, my plans are to release the full source code for the demo. Also, I'll take the "plumbing" parts of the code and release as a JavaScript demo framework in itself, and I'll open-source it.
But for now, enjoy the first two parts of "Artsy (and slightly insane)". The address is demo.atornblad.se/artsy.
I remember seeing the "TV Cube" part of Enigma for the first time – and not really being able to figure out how it was made. Heck, I couldn't even do the math for a
proper backface culling, so back in the 1990s my occasional 3D objects were never any
good. So the thought of making 2D and 3D objects appear on the surfaces of another 3D object was way beyond my understanding of math.
Once again, I am aware that the prettier way of doing this is by manipulating a transformation matrix to rotate, translate and project coordinates from different branches of a hierarchical coordinate system. But I ignored that and rolled it all by hand.
Star field
The stars on the front of the cube might look as if there is some depth, but that's just an illusion. Each star has an (X,Y) coordinate, and a third constant (which I called Z) that
governs speed along the X axis and also the alpha component of its color. The lower the speed, the dimmer the light. When observed face on, it gives the impression of a 3D space, but it's really just a form of parallax scroller.
Pseudo code
// Star field
for (var star, i = 0; star = stars[i++];) {
    // Move the star a bit to the "right"
    star.x += (star.z * star.z * speedConstant);

    // Limit x to (-1 .. 1)
    if (star.x > 1) star.x -= 2;

    // Left out: Project the star's coordinates to screen coordinates
    var screenCoords = ( /* left out */ );

    // Draw the star, using Z to determine alpha and size
    context.fillStyle = "rgba(255,255,255," + (star.z * star.z).toFixed(3) + ")";
    context.fillRect(screenCoords.x, screenCoords.y, star.z * 2, star.z * 2);
}
Hidden line vector
Back in the days, I could never do a proper hidden line vector, because I didn't know how to properly cull back-facing polygons. For the Phenomenal & Enigmatic "TV Cube" part, I
arranged all polygons in the hidden line pyramid so that when facing the camera, each polygon is to be drawn clockwise. That way I could use a very simple algorithm to determine each
polygon's winding order.
I found one really efficient algorithm on StackOverflow, and I
learned that since all five polygons are convex (triangles cannot be concave, and the only quadrangle is a true square), it's really enough to only check the first three coordinates,
even for the quadrangle.
Rotating the pyramid in 3D space was exactly the same as with the intro part of the demo, and after all coordinates are
rotated, I simply use the polygon winding order algorithm to perform backface culling, then draw all polygons' outlines. Voilà, a hidden line vector.
Pseudo code
// Hidden line vector

// Points
var points = [
    { x : -40, y : -40, z : 70 }, // Four corners at the bottom
    { x :  40, y : -40, z : 70 },
    { x :  40, y :  40, z : 70 },
    { x : -40, y :  40, z : 70 },
    { x :   0, y :   0, z : -70 } // And finally the top
];
// Each polygon is just an array of point indices
var polygons = [
[0, 4, 3], // Four triangle sides
[1, 4, 0],
[2, 4, 1],
[3, 4, 2],
[3, 2, 1, 0] // And a quadrangle bottom
];
// First rotate the points in space and project to screen coordinates
var screenCoords = [];
for (var point, i = 0; point = points[i++];) {
    screenCoords.push(rotateAndProject(point)); // rotateAndProject is left out
}

// Then go through each polygon and draw those facing forward
for (var polygon, i = 0; polygon = polygons[i++];) {
    var edgeSum = 0;
    for (var j = 0; j < 3; ++j) {
        var pointIndex = polygon[j];
        var pointIndex2 = polygon[(j + 1) % 3];
        var point = screenCoords[pointIndex];
        var point2 = screenCoords[pointIndex2];
        edgeSum += (point2.x - point.x) * (point2.y + point.y);
    }
    if (edgeSum < 0) {
        // This polygon is facing the camera
        // Left out: Draw the polygon using screenCoords, context.moveTo and context.lineTo
    }
}
Plane vector
The plane vector is super-simple. Just rotating a plane around its center and then using the code already in place to project it to screen coordinates.
Projection
The function responsible for translating coordinates in the 3D space to screen coordinates is not particularly complex, since it's basically the exact same thing as
for the intro part of the demo. Also, to determine which faces of the cube are facing the camera, I just
used the same backface culling algorithm as for the hidden line vector. I was really pleased with the end result.
Artsy (and slightly insane), first part now in beta
In between writing about the Phenomenal & Enigmatic JavaScript demo, I'm also doing a JavaScript remake of
the Arte demo by Sanity from 1993. The first effect I made was the "disc tunnel" effect,
seen 2min 9sec into the YouTube clip, and the entire first part is now live, but still in beta.
The address is demo.atornblad.se/artsy, but I haven't tested it in that many browsers and devices yet. I do know that
it crashes on Windows Phone 7.8 after just a few scenes, but it works really nicely on my iPad 3, especially in fullscreen mode. Add a shortcut to
your iPad's start screen for fullscreen mode.
I will make some changes to the bitmap tunnel effect, and make sure that the demo runs correctly on most browsers and devices. Also, stay tuned for parts 2 and 3
of Sanity Arte, and of course there will be a blow-by-blow description here on atornblad.se when the whole thing is complete.
When I made Phenomenal & Enigmatic, I didn't want to reuse any of the graphics art from the original demo. I had
decided to take the music and some sense of the overall demo design, but didn't want to infringe on the original graphics artist's creativity, so for
the Enigmatic logo, I turned to rendering the logo using code.
One popular technique of creating good-looking logos back in the Amiga days was to first draw the logo flat, then duplicating it in a new layer a few pixels off, making that new layer translucent. Then one would paint in side surfaces and edge lines with varying levels of opacity. After adding some final touches, like surface textures or lens flares, the end result would be a glossy, glassy look, much like the original Enigma logo by Uno of Scoopex.
The logo scene uses the same technique, painting the front, back and sides of the word ENIGMATIC as filled polygons with slightly different colors and opacity levels. During the scene, I animate some of the transformation vectors for effect. Of course, the original artwork by Uno is much better in exactly every way, but it was a fun exercise.
Pseudocode
// Logo renderer
function transformLogoCoords(chapterId, time, x, y) {
    // Left out: Perform a simple coordinate transformation
    return { x : transformedX, y : transformedY };
}

function logoMoveTo(chapterId, time, x, y, xOffset, yOffset) {
    var coords = transformLogoCoords(chapterId, time, x, y);
    context.moveTo(coords.x, coords.y);
}

function logoLineTo(chapterId, time, x, y, xOffset, yOffset) {
    var coords = transformLogoCoords(chapterId, time, x, y);
    context.lineTo(coords.x, coords.y);
}
function renderLogo(chapterId, time) {
    var xOffset, yOffset;
    // Left out: Calculate xOffset and yOffset from the chapterId and time values

    // Draw bottom surfaces
    context.beginPath();
    for (var block, i = 0; block = logoPolygons[i++];) {
        logoMoveTo(chapterId, time, block[0].x, block[0].y, 0, 0);
        for (var coord, j = 1; coord = block[j++];) {
            logoLineTo(chapterId, time, coord.x, coord.y, 0, 0);
        }
        logoLineTo(chapterId, time, block[0].x, block[0].y, 0, 0);
    }
    context.closePath();
    context.fill();

    // Left out: draw side surfaces
    // Left out: draw top surfaces
}
The opening scene of Enigma by Phenomena starts out looking like an average side-scrolling
star parallax, which was very normal in 1991. Nice way to lower people's expectations. :) But after just a couple of seconds, the stars begin
twisting and turning in space around all three axes.
Back in 1991 I knew how to rotate a plane around the origin, by simply applying
the angle sum identities of sine and cosine. I
also realized that rotating any 3D coordinate in space could be done by simply rotating around one axis at a time.
In the Phenomenal & Enigmatic demo, I only rotate the stars around two axes. First I rotate
the (x,z) components around the Y axis to get (x',z'), and then the (y,z') components around the X axis to get (y',z''). I also translate
each star along the X axis before the rotation takes place. To finally get the 2D screen coordinates of the 3D space coordinate, I take the (x',y') coordinate
and multiply by (2/(2+z'')) for a pseudo-distance feel. The z'' value controls both the color alpha component, and the size of the rectangle being drawn.
The even better way of doing this is through vector addition and multiplication, but I'm sticking to the math that I know. :) After all this math is in place, the trick is to change the offset and rotation variables in a nice way.
Rendering text is just a couple of calls to the context.fillText method and animating the value of context.globalAlpha.
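That text rendering could be sketched like this (my own reconstruction, not the demo's actual code; fadeAlpha and drawFadingText are hypothetical names):

```javascript
// Linear fade: 0 before fadeStart, 1 after fadeStart + fadeLength
function fadeAlpha(time, fadeStart, fadeLength) {
    return Math.min(1, Math.max(0, (time - fadeStart) / fadeLength));
}

// Draw a line of text with time-controlled opacity
function drawFadingText(context, text, x, y, time, fadeStart, fadeLength) {
    context.globalAlpha = fadeAlpha(time, fadeStart, fadeLength);
    context.fillText(text, x, y);
    context.globalAlpha = 1; // restore for other drawing
}
```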
Pseudo code
// Prefetch sine and cosine of angles
var cosY = Math.cos(yAngle);
var sinY = Math.sin(yAngle);
var cosX = Math.cos(xAngle);
var sinX = Math.sin(xAngle);

for (var star, i = 0; star = stars[i++]; ) {
    // Fetch x, y, z and translate x
    var x = star.x + xOffset;
    var y = star.y;
    var z = star.z;

    // Limit x to [-1 .. 1]
    while (x > 1) x -= 2;
    while (x < -1) x += 2;

    // Rotate (x, z) around Y axis
    var x2 = x * cosY + z * sinY; // x'
    var z2 = z * cosY - x * sinY; // z'

    // Rotate (y, z') around X axis
    var y2 = y * cosX + z2 * sinX; // y'
    var z3 = z2 * cosX - y * sinX; // z''

    // Transform to screen coordinates
    var screenX = x2 * 2 / (2 + z3) * halfScreenWidth + halfScreenWidth;
    var screenY = y2 * 2 / (2 + z3) * halfScreenWidth + halfScreenHeight;

    // Draw the star
    context.fillRect(screenX, screenY, 2 - z3, 2 - z3);
}
Twenty-two years ago, the Amiga demo scene was extremely active. The boundaries of the little 16-bit miracle machine were stretched a little bit more for each
new release, and a special breed of programmers was created. We loved efficient algorithms, we enjoyed squeezing out as much as possible from every CPU clock
cycle, and we really liked showing off to each other.
Back then, I was a decent 68000 assembler programmer, but nowhere near being among the greatest. I knew my way around the Copper and the Blitter, I knew how
trigonometry and vector mathematics worked for creating 3D worlds, and I understood that the "shadebobs" effect on the Amiga was nothing more than repeated
full-adders, using the "Fat Agnus" chip's dedicated memory block manipulation instruction set.
My favorite demo from 1991 was Enigma by Phenomena, programmed by Olof "Azatoth" Lindroth, with music by the amazing Jimmy "Firefox" Fredriksson and Robert "Tip" Österbergh. The combination of music and direction with some really good programming set a new standard for demos on the Amiga.
First attempt
About four years ago, I started replicating the Enigma demo in C# and Silverlight 2, just as a side-project. I got as far as the opening scene and
the "TV Cube" effect, which I must say I really nailed! But then I grew tired of the effort, and put the whole project aside. It just wasn't rewarding
enough, but I did re-awaken some of my old "hacking the bare metal" programming skills.
Come 2013
For the last couple of weeks, I've been working from scratch, exploring what is possible using just a HTML5 AUDIO element, a CANVAS element, and a truck-load
of JavaScript. Instead of trying to recreate the exact Enigma experience, I "borrowed" the amazing music, and did something of my own, inspired by Enigma.
I'll write a bit about each scene in the following weeks, but for now you're welcome to check out the fruit of my effort at demo.atornblad.se/enigmatic