
;
; hugi 27 compo entry
; by vulture a.k.a. Sean Stanek <vulture@cs.iastate.edu>
; 733 bytes of horribly unoptimized asm code, with a lot of 32-bit prefixes I might not need, but I got a cool compression algo out of it anyway :(
;

general algorithm:
  0. reorganize chars/colors and remove some text stuff
  1. tokenize 2-byte chars to 1-byte tokens
  2. first-order predictive arithmetic compressor   -->  404 bytes with no mtf/bwt/rle
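ad1.c itself isn't reproduced here, but the idea in step 2 can be sketched as a minimal order-1 adaptive binary arithmetic coder in C (fpaq0-style). Everything below is my assumption, not the entry's actual parameters: the 12-bit probabilities, the >>5 adaptation rate, and the context layout (previous byte x bit-tree node) are illustrative choices. Note it needs 32-bit interval bounds, which lines up with the precision complaint in the comments:

```c
#include <stdint.h>
#include <stddef.h>

#define NCTX (256 * 256)             /* context = previous byte x bit-tree node */
static uint16_t prob[NCTX];          /* 12-bit probability that the next bit is 1 */

static void model_init(void) {
    for (int i = 0; i < NCTX; i++) prob[i] = 2048;   /* start at p = 0.5 */
}

/* encoder state: a shrinking 32-bit interval [x1, x2] */
typedef struct { uint32_t x1, x2; uint8_t *out; size_t n; } Enc;

static void enc_bit(Enc *e, int ctx, int bit) {
    uint32_t xmid = e->x1 + (uint32_t)(((uint64_t)(e->x2 - e->x1) * prob[ctx]) >> 12);
    if (bit) { e->x2 = xmid;     prob[ctx] += (4096 - prob[ctx]) >> 5; }
    else     { e->x1 = xmid + 1; prob[ctx] -= prob[ctx] >> 5; }
    while (((e->x1 ^ e->x2) >> 24) == 0) {   /* top byte settled: emit it */
        e->out[e->n++] = (uint8_t)(e->x2 >> 24);
        e->x1 <<= 8;
        e->x2 = (e->x2 << 8) | 0xFF;
    }
}

static size_t ac_compress(const uint8_t *in, size_t len, uint8_t *out) {
    model_init();
    Enc e = { 0, 0xFFFFFFFFu, out, 0 };
    int prev = 0;
    for (size_t i = 0; i < len; i++) {
        int node = 1;                        /* walk a bit tree, MSB first */
        for (int b = 7; b >= 0; b--) {
            int bit = (in[i] >> b) & 1;
            enc_bit(&e, (prev << 8) | node, bit);
            node = node * 2 + bit;
        }
        prev = in[i];
    }
    for (int k = 0; k < 4; k++) {            /* flush remaining interval bytes */
        e.out[e.n++] = (uint8_t)(e.x1 >> 24);
        e.x1 <<= 8;
    }
    return e.n;
}

/* decoder mirrors the encoder, tracking the coded value x inside [x1, x2] */
typedef struct { uint32_t x1, x2, x; const uint8_t *in; size_t pos, n; } Dec;

static int dec_in(Dec *d) { return d->pos < d->n ? d->in[d->pos++] : 0; }

static int dec_bit(Dec *d, int ctx) {
    uint32_t xmid = d->x1 + (uint32_t)(((uint64_t)(d->x2 - d->x1) * prob[ctx]) >> 12);
    int bit = (d->x <= xmid);
    if (bit) { d->x2 = xmid;     prob[ctx] += (4096 - prob[ctx]) >> 5; }
    else     { d->x1 = xmid + 1; prob[ctx] -= prob[ctx] >> 5; }
    while (((d->x1 ^ d->x2) >> 24) == 0) {
        d->x1 <<= 8;
        d->x2 = (d->x2 << 8) | 0xFF;
        d->x = (d->x << 8) | (uint32_t)dec_in(d);
    }
    return bit;
}

static void ac_decompress(const uint8_t *in, size_t n, uint8_t *out, size_t outlen) {
    model_init();
    Dec d = { 0, 0xFFFFFFFFu, 0, in, 0, n };
    for (int k = 0; k < 4; k++) d.x = (d.x << 8) | (uint32_t)dec_in(&d);
    int prev = 0;
    for (size_t i = 0; i < outlen; i++) {
        int node = 1;
        for (int b = 0; b < 8; b++) node = node * 2 + dec_bit(&d, (prev << 8) | node);
        out[i] = (uint8_t)(node & 0xFF);
        prev = out[i];
    }
}
```

"first order predictive" just means each byte is modeled in the context of the byte before it, which is why tokenizing 2-byte pairs down to 1-byte symbols first helps so much: the model sees one strong context per screen cell instead of alternating char/attribute streams.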

space costs are:
  1. arithmetic compressor output (404 bytes)
  2. tokenized table lookup (70 bytes)
  3. extra string data (49 bytes)
  4. code (~210 bytes)
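The 70-byte table in item 2 is consistent with about 35 distinct (character, attribute) pairs at 2 bytes each. The real out.1/out.1a format is 1-tokenize.c's; this is only a hypothetical sketch of the tokenizing step, with invented names:

```c
#include <stdint.h>
#include <stddef.h>

/* Map each distinct 2-byte (character, attribute) pair in a text-mode
   screen dump to a 1-byte token.  `screen` holds n interleaved pairs;
   `tokens` receives one byte per cell; `table` receives the distinct
   pairs (this becomes the decoder's lookup table).  Returns the number
   of distinct pairs found. */
static int tokenize(const uint8_t *screen, size_t n,
                    uint8_t *tokens, uint8_t table[][2]) {
    int ntok = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t ch = screen[2*i], at = screen[2*i + 1];
        int t;
        for (t = 0; t < ntok; t++)           /* linear search: alphabet is tiny */
            if (table[t][0] == ch && table[t][1] == at) break;
        if (t == ntok) { table[ntok][0] = ch; table[ntok][1] = at; ntok++; }
        tokens[i] = (uint8_t)t;
    }
    return ntok;
}
```

Detokenizing in the decompressor is then just an indexed load of the pair back out of the table, which costs almost nothing in code size.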


to compress:
  1. x.c to process image.bin to image2.bin
  2. hex edit to process image2.bin to image3.bin (just removed text stuff by hand and filled in with surrounding backgrounds)
  3. 1-tokenize.c to convert image3.bin to out.1 (new data file) and out.1a (token lookup file)
  4. ad1.c on out.1 to convert out.1 to out.ad1 (compressed data file)
  5. ad1rev.c on out.ad1 to reverse data order to save some space in decompress code

comments:
  - mke.bat for tasm assemble
  - un1.c should verify compress/decompress
  - I store the datastream in reverse order: the data pointer starts at the end and decrements as range_code needs updating, which saves some space!
  - unfortunately, it seems like we need 32-bit precision (24-bit didn't seem to be sufficient :()
      ... so I used a lot of 32-bit prefixes and the assembly code is horribly unoptimized
  - still got some cool tricks in here
  - could probably contend for top 5 slots if I could do it without the 32-bit precision necessary :(  ... or maybe use FPU
  - included 235987235987235 other files of other algorithms and ideas I tried
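The ad1rev.c step amounts to reversing the byte order of the compressed stream, so the decompressor can keep its data pointer at the end and just decrement it instead of managing a forward pointer. A trivial C sketch of that reversal (the real ad1rev.c may differ):

```c
#include <stdint.h>
#include <stddef.h>

/* Reverse a byte buffer in place. */
static void reverse_stream(uint8_t *buf, size_t n) {
    if (n < 2) return;
    for (size_t i = 0, j = n - 1; i < j; i++, j--) {
        uint8_t t = buf[i];
        buf[i] = buf[j];
        buf[j] = t;
    }
}
```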
