Archive for the ‘Tech’ Category

In The Cards

Sunday, March 6th, 2016

Go and get a deck of cards, shuffle it, and deal it out in the order you shuffled it. You may find this hard to believe, but it’s extraordinarily likely that no one ever has shuffled cards into the same order you just did.

[Image: random cards]

You’ve probably heard this before, since it’s one of those quite interesting facts that does the rounds. But in case you haven’t, the reason is that the number of different ways a 52 card deck can be ordered (52 factorial, written 52!) is astonishingly high. The number is so big it’s difficult to parse:

8065817517094387857166063685640376
6975289505440883277824000000000000

Yes, that’s one number written across two lines. To get an idea of how large this is: it’s significantly greater than the number of stars in the observable universe, the number of atoms in your body, and the number of seconds that have elapsed since the universe was created in the big bang.
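If you want to verify that figure yourself, a couple of lines of Python (my choice; any language with big-integer support will do) gets you there:

import math

# 52! = the number of distinct orderings of a 52-card deck
print(math.factorial(52))
# -> 80658175170943878571660636856403766975289505440883277824000000000000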

I’ll take it one step further. Depending on who you ask, between 60 and 108 billion humans have ever lived, so let’s call it roughly 80 billion. Applying the Doomsday Argument to that figure suggests that about 1.2 trillion humans will ever live (and there’s a blog post on that topic itself!).

So in the entirety of human history, if every human who ever lived made it to an average age of 70 and spent every single second of their life shuffling cards, the entire output of humanity would correspond to only a minuscule subset of the total possible permutations of a 52 card deck (3.3E-45% of them, to be precise).

So shuffle that deck, deal it out, and be impressed with a creation that only you have made.

[Image: Intellivision]

This has interesting relevance to the field of gambling, which requires randomized deals lest the player guess the card order. Human dealers can shuffle decks in a way that makes them almost completely random (although studies have shown that a virgin deck must be shuffled anywhere from 4 to 7 times to eliminate the order inherent in the way it was packaged). But these days the vast majority of deck shuffling is done by computers, and it’s not trivial to make computers do things truly randomly.

The very first computer games that included card shuffling had extremely primitive random number generation and could only produce a limited number of unique decks. Random number generators require a ‘seed’ (i.e. a start value upon which all subsequent values are based), and any two sequences based on the same seed are identical. Games on the Atari 2600 and Intellivision (shown above) typically used hardware values or player input (such as the number of frame refreshes that occurred before the player pushed the start button) as seeds, but even then were usually limited to only a couple of hundred unique decks. Given enough time and effort, therefore, you could deduce the entire order of the deck from the first few cards dealt.
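As a minimal sketch of the problem (in modern Python rather than 2600 assembly, obviously), note how a fixed seed means a fixed deck:

import random

def deal(seed):
    # the same seed always produces exactly the same 'shuffle'
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# a machine with only ~200 possible seed values can only ever
# produce ~200 distinct decks
assert deal(123) == deal(123)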

As time moved on the algorithms became more sophisticated, and so too did the random number generators, but even then it was possible to predict deck orders if you had enough information. In 1999 an online casino, in an attempt to demonstrate their games were not rigged, actually posted their RNG code online. Someone took it, worked out how it was seeded (based on the clock time, as I did in my polycap simulation), and wrote their own code that could perfectly reproduce the shuffles of the games they were playing online.

So we get to today, where RNGs use very creative ideas to give themselves truly random seeds (such as code that converts video frames captured from random YouTube videos, or 1 second of white noise from a radio, into seed values). But there is still a problem: the range of numbers a generator can produce is limited by the size of its internal state, about 3.4E38 for a typical 128-bit state. In short, you can’t generate a number between 1 and 8.06E67, which means you cannot generate one number for each possible deck permutation.
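The arithmetic behind that claim, assuming a generator with a 128-bit state:

import math

print(math.log2(math.factorial(52)))  # ~225.6: indexing every deck needs 226 bits
print(2 ** 128)                       # ~3.4E38: all a 128-bit state can offer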

There are ways around this (hint: using only a single coin you can generate two random values) but it makes the task of writing a deck shuffling simulator that can account for every possible permutation non-trivial.

I think.
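For example (a sketch, and only one of several approaches): a Fisher–Yates shuffle that draws fresh entropy for every swap, rather than stretching a single fixed-size seed, can in principle reach all 52! permutations:

import secrets

def shuffle(deck):
    # Fisher-Yates with a fresh random draw per swap, so no single
    # seed limits which permutations are reachable
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

print(shuffle(list(range(52))))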

[Image]

So as a result of this thought experiment, Bernard’s going to do it! Here’s my design document:

1) Assign all 52 cards a random number
2) Sort them
3) Output shuffled deck
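In Python (my choice; Bernard will no doubt use something faster) the whole design is a sketch like this:

import random
import time

def shuffle_by_sort():
    # 1) assign all 52 cards a random number, 2) sort by it, 3) output
    deck = list(range(52))
    keyed = sorted((random.random(), card) for card in deck)
    return [card for _, card in keyed]

# per the test plan: the first 15 cards plus a timestamp
print(time.time(), shuffle_by_sort()[:15])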

It’s trivial stuff, and should only take him a femtosecond or two to implement. But the true fun is in the testing! What I’m really interested in is how many unique shuffles are completed before a repeat occurs. Therefore the output (deck order) will need to be saved, as well as the time at which each shuffle was completed. Plus, since 52! is insanely large (the world will end before his computer shuffles that many times) I’d say saving the first 15 cards + the time the shuffle occurred is sufficient to do some statistical analysis.

So there you go Bernard, there’s your challenge. Write the code, run it 1.2 trillion times*/**, save the first 15 cards in each deck and the time the shuffle was performed, and then analyze it to see if any sequence repeated.
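A spoiler of sorts, though: the birthday problem says a repeat only becomes likely after roughly the square root of 52! shuffles, which is still absurdly far beyond 1.2 trillion:

import math

# birthday-problem estimate of how many shuffles until a repeat is likely
print(math.sqrt(math.pi / 2 * math.factorial(52)))   # ~1.1E34

So unless the generator itself is flawed, Bernard should see zero repeats, which is of course exactly what makes it an interesting test.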

Let us know the results 🙂

* I’ll assume you have a modern Pentium running at about 100k MIPS, and that this code requires maybe 1000 operations per shuffle (a big guess there; the sort could take many more), which works out to about 12000 seconds, or roughly 3.3 hours, per experiment. However, writing the results to disk will slow it down a lot, I suspect. Good luck!

** A very rough mental calculation tells me the output may be on the order of 17 TB. I hope you have a lot of space! Even more luck to you, sir!!

Pong

Friday, November 27th, 2015

For my next electronics kit challenge, I made this:

[Photo]

Yes, a TV Pong kit, in component form. It was cheap (about $10) and looked easy to assemble. Could it be any good?

Here it is ready to be put together:

[Photo]

Not a bad number of pieces. The PCB is very clean and easy to solder onto, and compared to some other kits I have made this one was extremely easy to assemble. All told, it only took about an hour. Here’s the finished product:

[Photo]

What you can’t see: my impressive soldering 🙂

But the true test was to come. Skeptically, I connected it to my ‘TV that exists just for old game consoles’ and turned it on…

The ball moves so quickly it’s almost impossible to hit it, even in the 1P mode. But whether it is playable is immaterial: it worked first go!

Even if you’re not impressed by that, I was 🙂

JPEG Musings

Tuesday, November 3rd, 2015

Warning: This post is dense with extreme nerdery. Read at your own risk!

I occasionally send my brother quiz questions via text, and as part of a ‘programmers quiz’ I asked him the following last week: “What is the theoretical smallest possible size for a 1024 x 768 jpg image file?”  As befits his ‘coding maestro’ status I expected – and received –  a quick reply: 1024 bytes. I immediately knew he was guessing, and – sadly – incorrect.

The trouble is, I didn’t know the answer either. So here, right now, I’ll attempt to work it out.

A bit of backstory here: over a decade ago, one of the first bits of coding work I did at school was to write a piece of software that converted .tiff image files into Excel spreadsheets. It was a bit nifty, I thought, especially as the .tiff format was a bit messy and it was my first time running up against the rather hoggish memory requirements of very large Excel files (for which I had to develop a workaround). In doing so I developed a bit of an academic interest in the structure of image files, so my question to B didn’t come from nowhere.

Starting research into this problem led quickly to the discovery that there are quite a few different types of JPEG file, each with slightly different header structures…

Actually, let’s go back a bit. Only a portion (admittedly the largest portion) of an image file corresponds to the actual image itself. The rest is markers – pieces of information that detail the file type, the file structure and the contents. Think of them like the cover and table of contents of a book, where the actual story (in the book) corresponds to the image data.

In terms of image files, JPEG (for Joint Photographic Experts Group) refers to a technique for compressing data so it can be stored in a smaller size. The mechanics of the compression itself are interesting, and can be thought of as reproducing the data using pre-determined blocks to minimize space. Think of rebuilding a house in Lego: it will look basically the same, especially from a distance, but up close you’ll notice all sorts of differences. JPEG compression has the same effect on data. This lossiness is unacceptable for important data (such as executables) but fine for images, which is why JPEG has become the standard for photographic image files.

There are a few related versions of JPEG files, which can be grouped into two broad families: JFIF and EXIF. The difference is in the markers and the exact compression techniques. You could spend ages studying the details, but here I’m just trying to create the smallest file possible, so the answer is to go with a JFIF file, because its required markers take fewer bytes.

So my question becomes: “What is the theoretical smallest possible size for a 1024 x 768 JPEG/JFIF image file?”

Let’s methodically go over the barest minimum requirements:
– 2 bytes are required for the SOI (start of image) marker (FF D8)
– 18 bytes are required for the mandatory APP0 (JFIF) marker segment. The actual size is 18 + 3n bytes, where n is the number of pixels in the optional thumbnail, but since size matters here I’ll skip the thumbnail (i.e. n = 0).
– 2 bytes are required for the SOS (start of scan) marker (FF DA) that introduces the image data
– N bytes are then dedicated to the image data itself
– 2 bytes are required for the EOI (end of image) marker (FF D9)

Presumably the above is enough. So it seems the answer so far is “24 + N bytes”, where N is the number of bytes required for the compressed image data.
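As a sketch of that tally in Python (the marker bytes are from the JFIF spec; note that in a real file the FF DA marker is followed by a scan header, which I’m ignoring here just as the tally above does):

SOI = bytes([0xFF, 0xD8])   # start of image
APP0 = (bytes([0xFF, 0xE0, 0x00, 0x10]) + b"JFIF\x00"
        + bytes([0x01, 0x01, 0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00]))  # 18 bytes, no thumbnail
SOS = bytes([0xFF, 0xDA])   # start of scan
EOI = bytes([0xFF, 0xD9])   # end of image

def minimal_jpeg(image_data):
    # 24 + N bytes, per the tally above
    return SOI + APP0 + SOS + image_data + EOI

print(len(minimal_jpeg(b"")))   # 24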

So how to calculate N? Well, the compression method itself is deliciously complex, and amounts to rebuilding each 8×8 block of the image as a weighted mix of the following 64 basis patterns:

[Image: the 64 JPEG DCT basis blocks]

Since the file would be monochrome (to reduce the required number of blocks) and of dimension 1024 x 768, it would require (1024/8) x (768/8), or 128 x 96 = 12288 blocks at maximum compression. Obviously the block used would be the solid one at top left (the DC block), but the question is: can JPEG compress even further than simply saying (in code) “96 rows of 128 solid blocks in a row”? One would imagine yes.

Putting that aside for a moment, the following 134-byte file is (apparently) the smallest possible JPEG image file (a 1 x 1 image of a single grey pixel, obtained from a few places online):

FF D8 FF E0 00 10 4A 46 49 46 00 01 01 01 00 48 00 48 00 00
FF DB 00 43 00 FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF
FF FF FF FF FF FF FF FF FF FF C2 00 0B 08 00 01 00 01 01 01
11 00 FF C4 00 14 10 01 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 FF DA 00 08 01 01 00 01 3F 10

Now if a 1024 x 768 file used 12288 (solid) blocks to recreate the image (at one byte each), and the 24 required marker bytes are added, you’ve got a 12312 byte file. This is about 92 times the above, which isn’t bad for an image 786432 times larger. But more bytes must be required, since according to my quick calculations the 1 x 1 file would only need 25 bytes, yet actually uses 134. The difference is mostly due to the inclusion of the non-mandatory but apparently always-used DQT, SOF and DHT segments (FF DB, FF C2 and FF C4 in the dump above), which define the quantization and Huffman tables used by the compression. So those extra 109 bytes should probably be included as an approximation for the larger file of the original problem.

This gives a filesize of about 12421 bytes (12312 + 109), or about 12.1k.

And yet. And yet this must be wrong. For instance, the values 1024 and 768 can be stored in 2 bytes each, the colour (black?) in another byte, and the block used in another. Couldn’t the data be compressed to only 6 bytes? Obviously not, based on the 1 x 1 file. And obviously not, based on these:

[Images: a solid black and a solid blue 1024 x 768 JPEG]

Both of them are solid 1024 x 768 JPEG files. The black one is 12.3k (according to Mac OS 10.9.5); the blue one is 12.9k. Note that the first example is very close to my estimate.

[Image: a solid black 1024 x 768 JPEG]

That’s a solid black JPEG I created myself. It’s 16k according to this computer, which tells me Apple adds a bunch of unnecessary marker information to the file (which is common).

So based on my quick calculations, a few educated guesses, and some examples, I’m going to state that the answer to the question is somewhere between 12 and 13k.
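If you want to reproduce the experiment yourself, here’s a sketch using the Pillow library (the exact size will vary with the encoder and quality settings):

import os
from PIL import Image

# a solid black 1024 x 768 single-channel image, saved as JPEG
img = Image.new("L", (1024, 768), 0)
img.save("black.jpg")
print(os.path.getsize("black.jpg"), "bytes")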

At this point you may be tempted to question the utility of JPEG as a technique for compressing simple files like these. Consider the extreme example of a book that contains just the word ‘horse’ 50,000 times. Such a book would run to about 200 pages, but you could ‘compress’ it to a single sentence that simply read “Horse, fifty thousand times.” and save a lot of paper. My argument here is that 12288 bytes is an awful lot for something that could be encoded in only 6 bytes using my scheme above.
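For comparison, here’s what a general-purpose compressor (zlib, picked purely for illustration) does with the ‘horse’ book:

import zlib

book = b"horse " * 50000                     # ~300 kB of very repetitive text
print(len(book), len(zlib.compress(book)))   # 300000 -> a few hundred bytes

JPEG simply isn’t designed for that kind of degenerate input.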

I don’t know the answer, except to speculate that JPEG is a compression technique well suited to the vast majority of image files. As with all things, though, there are exceptions, and if the standard is evaluated solely in terms of its ability to reduce file size, then for degenerate examples such as these the gains may not be as great as with another standard (such as GIF).

As a final note, reading about the JPEG format as I have this past week has convinced me of two things:
– The actual mathematics behind the transforms used to compress the data is fascinating.
– Writing code to actually transform a 1024 x 768 solid block of color and generate the compressed data would be akin to reinventing the wheel 🙂

I hope this answer to an idle question is enough for my brother!

Black Celebration

Sunday, November 1st, 2015

Halloween was a bust. We bought this much candy: 

 
I even turned into a zombie: 

 
But we got almost no kids! 

However all was not lost, since I bought a black light! Much fun has been had looking around the house under UV light. Here’s a selection of example photos:

  
The most notable item on our fridge is our glow-in-the-dark Baby Jesus (upper right). This is hardly surprising, since it’s made of a material designed to fluoresce. 

 
An almost forgotten wall sticker hidden behind a door reveals a dramatic (and appropriate) surprise under UV light! 

 
The sticker wall on a bookshelf, showing that not all whites are the same under black light!

 
Check out that Gundam plastic runner on the table. This isn’t even one of the UV sensitive Gundam kits 🙂 

 
Keimi the mermaid (from One Piece) has a secret! Speaking of figurines… 

 
Power Girl looks very different, doesn’t she?

For an item that cost me a tenner at Wal-Mart (and is already breaking) this is a lot of fun! 

 
Happy Halloween 😉

Things We Saw At The Computerspielemuseum

Tuesday, August 25th, 2015

A couple of weeks ago (it’s been that long?!) K, B and I visited a museum in Berlin dedicated to Computer Games. It was small, but it was very good, and perhaps even the best of the few I’ve visited over the years. Here’s a random selection of things we saw in the museum…

In the earliest part of the museum they had the landmarks of pre-computer gaming, such as (very) 1st edition Dungeons and Dragons:

[Photo]

And the first gamebook ever written, Sugarcane Island (written 1969, published 1976):

[Photo]

They had holy grails of the Golden Age:

[Photos]

Crazy game art from the 1980s:

[Photo]

A small but good condition arcade:

[Photos]

A well-done series of rooms decorated to resemble certain ages of gaming. Here’s Bernard in the 1980s attic room (presumably a typical German household attic from that era):

[Photo]

They also had Germany’s own homegrown console from the early 1980s. Only about 40 games were ever released:

[Photo]

You could design your own sprites:

[Photo]

You could pose with Lara Croft(s):

[Photo]

Or you could look ridiculous playing Atari Ms Pac-Man using a titanic joystick:

[Photo]

And you could even risk your life playing the Painstation:

[Photo]

This is a massive two-player Pong game where the players are penalized for mistakes in the form of heat, electric shocks or whips to the hands (see details here). We watched two people play it, and as the game progressed they certainly seemed to be feeling the pain. I would have played it, but my compatriots were hesitant 🙂

As I said, a small museum but a goodie. If it hadn’t been so hot, and had we not already been overcome by ruination, I would have liked to spend hours there reading all the information. Recommended if you’re in Berlin.