We often say there is just too much out there. You can’t possibly see everything this universe has to offer, not in a million years or even a billion.
Consider an image with a resolution of 640×480 (i.e. better than standard TV and pretty close to DVD) and an RGB color depth of 0–255 per channel. Although not the best, this is perhaps a good approximation for the human eye. With this setup there are actually only finitely many images possible: 256^(3×640×480) = 2^7,372,800 of them, a number with roughly 2.2 million decimal digits.
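As a sanity check on that arithmetic (assuming the intended resolution is 640×480, with 3 channels of 256 levels each), a few lines of Python show just how large this finite space of images is:

```python
import math

# A 640x480 frame with 3 color channels, each 0-255: how many distinct
# images can it possibly display?
pixels = 640 * 480                     # 307,200 pixels
total_images = 256 ** (3 * pixels)     # finite, but enormous

# The number is far too large to print in full; count its decimal digits instead.
digits = math.floor(3 * pixels * math.log10(256)) + 1
print(digits)  # about 2.2 million digits
```

So every image you will ever see is one point in a finite, if absurdly large, set.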

I can get sucked into challenges very easily, especially when they involve artificial intelligence or statistical analysis. The challenge that has occupied my interest these days is the one put up by Netflix. It’s easy to describe: they give you 100 million data points, each a triplet (customer, movie, rating), and you have to predict the rating for given (customer, movie) pairs. If the average of the squared errors of your predictions falls below a certain value, you win a million-dollar prize.
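The scoring idea can be sketched as root mean squared error over the predictions; the function name and the toy ratings below are illustrative, not Netflix’s actual evaluation harness:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error: the square root of the average squared error."""
    assert len(predicted) == len(actual)
    total = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    return math.sqrt(total / len(predicted))

# Toy example: predicted vs. actual 1-5 star ratings.
print(rmse([3.5, 4.0, 2.0], [4, 4, 3]))
```

Lowering this number by even a few hundredths over 100 million ratings turns out to be the hard part.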

I’d been attempting to post this entry right from inside Word 2007 Beta 2, but no luck so far interfacing with dasBlog. The good thing is that most Office 2007 apps now seem to be blog and RSS aware. You can manage your blogging accounts right inside Word or OneNote, although a few things, like pinging Technorati, are missing.
The coolest thing in Office 12, though, is the new math-related functionality.

There are many intersections between my thoughts on randomness and Chaitin’s. I think these papers would be extremely interesting, and they could be a significant advancement for the Minimal Instruction Set and Program Complexity theory I’d been toying with for many years: G J Chaitin Papers

UPDATE: Check out my article with a detailed HowTo on this topic.
A weekend’s worth of effort has paid off, so I can finally write about this equation. Those who want to write about mathematics in their blogs know what I’m talking about: quite ironically, there is no built-in support for writing math equations in HTML. Of all types of knowledge, mathematics is the one that remains invariant across time, cultures, and languages.

A famous quote from John von Neumann goes like this:
Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
This is something I’ve intuitively believed since I was 15, before I had even heard of von Neumann. Purely random numbers are as fascinating a concept as any in mathematics, probably more so. It is impossible to generate a sequence of purely random numbers without tapping into nature.
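Von Neumann’s point is easy to demonstrate with a linear congruential generator, one classic “arithmetical method” of producing random-looking digits; the constants below are the common Numerical Recipes values, chosen only for illustration:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: an 'arithmetical method' in von
    Neumann's sense. The output looks random, but it is fully determined
    by the seed."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen1 = lcg(42)
gen2 = lcg(42)
first = [next(gen1) for _ in range(5)]
second = [next(gen2) for _ in range(5)]
print(first == second)  # True: same seed, same 'random' sequence, every time
```

However statistically well-behaved the output, the sequence is pure arithmetic, hence von Neumann’s “state of sin.”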

Here’s a mathematical/algorithmic puzzle for all you bright, challenge-seeking minds:
I have my own big business contact list, and so do my friends. One day we decide to call each other on the phone (only two people per line), talk, and get our lists synced up with each other. That is to say, we add any contacts we didn’t already have, update any outdated ones, and delete anyone who has gone out of business.
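The merge step for a single call could be sketched like this; the record layout, the timestamps, and the tombstone flag for out-of-business contacts are all my own illustrative assumptions, not part of the puzzle:

```python
def sync(mine, theirs):
    """Merge two contact lists keyed by contact id.

    Each record is (last_updated, phone, out_of_business). Keeping a
    'tombstone' record for out-of-business contacts, rather than just
    deleting them, lets deletions propagate through later calls too.
    """
    merged = dict(mine)
    for cid, record in theirs.items():
        # Add contacts we didn't have; keep the newer version of shared ones.
        if cid not in merged or record[0] > merged[cid][0]:
            merged[cid] = record
    return merged

mine = {"acme": (2, "555-0100", False)}
theirs = {"acme": (5, "555-0199", False), "globex": (1, "555-0123", True)}
result = sync(mine, theirs)  # both parties keep this merged list
```

After the call, both parties hold `result`, hiding tombstoned entries when displaying; the interesting part of the puzzle is how few calls it takes before everyone has everything.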

These are the two most interesting subjects for me right now. This book on neural networks is probably the best way to take a dive into the field. It’s truly a magnificent book that you can read like a thriller (assuming you are not afraid of some mathematical depth) while watching the methods really work through cool numerical examples. Its short but concise explanations might make this book deceptively tiny, but it’s surely the best introduction to the field.

I first read about Bayesian probability in Paul Graham’s milestone article on spam two years ago. Amazingly, his Bayesian algorithm did the same thing a sophisticated AI algorithm would do, i.e. precisely identify spam emails (success rate: 995 out of 1000), just as humans do with their image recognition, natural language processing, and as-yet unparalleled intelligence. That got me interested, and Bayesian probability got added to my list of things to learn.
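The gist of that approach can be sketched as a tiny naive Bayes scorer; this toy version, with made-up training text and crude smoothing, is only a sketch of the idea, not Graham’s actual algorithm:

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count word occurrences in each class."""
    spam = Counter(w for d in spam_docs for w in d.split())
    ham = Counter(w for d in ham_docs for w in d.split())
    return spam, ham

def spam_score(text, spam_counts, ham_counts):
    """Log-odds that `text` is spam, assuming words are independent
    (the 'naive' part). The +1 smoothing keeps an unseen word from
    driving a probability to zero."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    score = 0.0
    for w in text.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + 2)
        p_ham = (ham_counts[w] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

spam_counts, ham_counts = train(["win free money now"], ["meeting notes attached"])
print(spam_score("free money", spam_counts, ham_counts) > 0)  # True
```

Simple word counts plus Bayes’ rule go a surprisingly long way, which is exactly what made Graham’s result so striking.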

© Shital Shah. All rights reserved.