Compression for computers is somewhat like mnemonics for humans: both create a formula for committing things to memory more easily. This analogy yields an interesting insight. If mnemonics are, in effect, a form of data compression for the human brain, then examining how they work can reveal which ways of encoding data are most effective.
When memory experts explain how they memorize long numbers, they emphasize visualization: translating the series of digits into an easily digestible image, such as a room full of furniture. This suggests that our brains, shaped by millions of years of evolution, find images easier to remember than strings of numbers or characters. Computers are exactly the opposite: to them, images are large and memory-costly.
Are computers doing it wrong? Maybe they need a more symbolic way of encoding data than individual pixels and bits, one closer to how humans think. Instead of "P-I-N-E T-R-E-E," in which each letter costs 8 bits in UTF-8 (or 16 in UTF-16), we simply call to mind the image of a pine tree. What if computers used different circuitry, closer to our chemical one, to store information? What if they used a nearly infinite, flexibly sized alphabet instead of restricting themselves to an alphabet of two? Is such an information system feasible?
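To make the bit-counting concrete, here is a minimal Python sketch of the idea. It assumes a small hypothetical vocabulary of whole concepts (the `vocabulary` list below is invented for illustration) and compares the cost of spelling a phrase character by character against the cost of naming it with one symbol drawn from that vocabulary:

```python
import math

# A tiny "symbolic" vocabulary: each whole concept gets one code,
# the way a mnemonist recalls one image instead of nine characters.
# (Hypothetical vocabulary, purely for illustration.)
vocabulary = ["pine tree", "red door", "grand piano", "lighthouse"]

phrase = "pine tree"

# Character-level cost: UTF-8 spends 8 bits per ASCII character.
char_bits = len(phrase.encode("utf-8")) * 8

# Symbolic cost: one code from the vocabulary. With |V| symbols,
# a fixed-length binary code needs ceil(log2(|V|)) bits per symbol.
symbol_bits = math.ceil(math.log2(len(vocabulary)))

print(f"character encoding: {char_bits} bits")   # 72 bits
print(f"symbolic encoding:  {symbol_bits} bits") # 2 bits for 4 concepts
```

The standard catch, as in any dictionary-based compression scheme, is that the vocabulary itself has to be stored or shared between sender and receiver; the savings come from reusing its symbols many times, much as a mnemonist amortizes the effort of building a mental palace.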