Computers have become very efficient at producing vast quantities of data, but for modern computing and business systems the challenge is to create computers which can ‘see’ and ‘process’ data in a way more akin to the human brain.
Kwabena Boahen, a researcher at Stanford University, has spent many years investigating ‘how’ the brain thinks. His research describes the efficient network of interconnected neurons in the brain, and likens it to our emerging ‘parallel’ and ‘distributed’ computing architectures (similar to how the internet itself works as a ‘system’). These systems, unlike “old computing”, are fluid, dynamic and robust, able to cope with imperfect data. His research is illustrated using an ‘electronic retina’, a system of transistors and connections which models the human eye. The results are startling, showing that 90% of the ‘sight’ data reaching the brain is simply about contrast and movement. This allows the brain to quickly take in and analyse the massive amount of visual data it must process to give us a clear understanding of our world (the same phenomenon is used in video streaming, where only the bits of an image which have ‘changed’ are sent, saving a great deal of data).
Parallel and distributed systems are quickly moving from the realms of ‘macro’ and corporate computing into business and consumer environments, meaning that at our fingertips we now have access to systems which can efficiently manage data in a manner akin to the brain. This takes us neatly into our next important concept…
At an early age we, as humans, learn the art of data organisation. We learn that social networks can be organised into “cliques” and that “words can fit into overlapping categories” (e.g. dog, mammal, animal).
In a significant advance in artificial intelligence, scientists at MIT have developed a model which can recognise these patterns and organise data into the most appropriate structures, such as trees (common in genealogy), linear orders, rings, dominance hierarchies, clusters and more.
"Instead of looking for a particular kind of structure, we came up with a broader algorithm that is able to look for all of these structures and weigh them against each other," said Josh Tenenbaum, an associate professor of brain and cognitive sciences at MIT and senior author of the paper. The model could help scientists in many fields analyze large amounts of data, and could also shed light on how the human brain discovers patterns.
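To make the idea of discovering cluster structure concrete, here is a toy sketch. This is emphatically not the MIT model, which weighs many structural forms against each other; it is just a simple single-linkage grouping over one-dimensional values, illustrating one of the structures (clusters) the model can find:

```python
# Toy single-linkage clustering: sorted values are merged into the same
# cluster whenever the gap to the previous value is below a threshold.
# (A stand-in illustration, not the MIT structure-discovery algorithm.)

def cluster(points, threshold):
    """Group sorted points into clusters separated by gaps >= threshold."""
    pts = sorted(points)
    clusters = [[pts[0]]]
    for p in pts[1:]:
        if p - clusters[-1][-1] < threshold:
            clusters[-1].append(p)   # close enough: join the current cluster
        else:
            clusters.append([p])     # large gap: start a new cluster
    return clusters

data = [1.0, 1.2, 1.1, 5.0, 5.3, 9.8, 10.1]
print(cluster(data, threshold=1.0))
# [[1.0, 1.1, 1.2], [5.0, 5.3], [9.8, 10.1]]
```

The model described in the paper goes much further, also testing whether the data fits better as a tree, a ring, a linear order and so on, rather than assuming clusters from the start.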
Business, now more than ever, produces masses of data which our software struggles to organise in a way that makes it easy for us to understand and process. Many providers of management information, knowledge management, and other solutions aid our ever-increasing sorting and dissemination needs, but more often than not we are left unable to take in the volume and variety of information we are presented with, leading, paradoxically, to less accurate decisions.
By applying the theories set out in Tenenbaum's paper, our applications will be able not only to sort data more effectively, but also to present it in a way which mirrors the logic our own brains use. Here are a few hypothetical situations:
• ABC Ltd, a market research company, could apply this logic to take quantitative results from a research study, and more efficiently understand the structure and relationship of responses.
• DEF Ltd, a large retailer, could more effectively manage a wide range of products by grouping and sorting individual lines and categories more accurately based on relationships between buying patterns and supply factors.
• GHI Ltd, a financial trading floor, could use this model to create advanced tools to understand the relationships between economic and other events, and fluctuations in their traded products (e.g. currencies).
From these larger situations, we can reach the more mundane, where applications could include rapidly grouping contacts by characteristic in your address book, or combining with face recognition and other technologies to accurately sort your photos in a human manner.
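The address-book example above is simple enough to sketch directly. The contacts and the ‘city’ characteristic here are hypothetical, chosen only to show the grouping idea:

```python
# Grouping hypothetical contacts by a shared characteristic (here, city).
from itertools import groupby

contacts = [
    {"name": "Ann", "city": "London"},
    {"name": "Bob", "city": "Paris"},
    {"name": "Cat", "city": "London"},
]

# groupby requires its input to be sorted by the grouping key.
by_city = {}
key = lambda c: c["city"]
for city, group in groupby(sorted(contacts, key=key), key=key):
    by_city[city] = [c["name"] for c in group]

print(by_city)  # {'London': ['Ann', 'Cat'], 'Paris': ['Bob']}
```

A real application would of course infer which characteristics matter rather than being told to group by ‘city’; that inference is precisely what the structure-discovery research aims at.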
These technologies are the start of a revolution in how we, as computer users, access and interact with data sets. Behind these ‘logic’ solutions, other technologies are allowing our systems to ‘pre-process’ massive amounts of data more effectively. Google’s “MapReduce” technology allows them to ‘chunk’ and ‘sort’ through incredibly large and complex data sets. Wired magazine (on July 16th 2007) detailed a staggering example of this, in which MapReduce was asked to “Count every use of every word in Google Books” (an archive spanning millions of books), and the output was then used to answer questions such as, “How often does Tolstoy mention Moscow? Paris?”. By changing the nature of how software thinks about data, and using distributed architectures to process massive amounts of information, this technology is able to answer such queries at remarkable pace (within a few seconds).
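The word-counting task from the Wired example maps neatly onto the two phases MapReduce is named for. Below is a toy, single-machine sketch of those phases; Google's real system distributes them across thousands of machines, which is the entire point, but the shape of the computation is the same:

```python
# Toy in-memory map/reduce word count (illustrative only --
# real MapReduce shards both phases across many machines).
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the emitted counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["Moscow Paris Moscow", "Paris and Moscow"]
print(reduce_phase(map_phase(docs)))
# {'moscow': 3, 'paris': 2, 'and': 1}
```

Because every map call and every per-word reduction is independent, the work splits cleanly across machines; that independence is what lets the same pattern scale from three sentences to millions of books.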
Now imagine the potential of a data-structuring logic architecture (as above) placed in front of such a sorting tool, able to understand the ‘macro’ and ‘micro’ relationships in our masses of data. In recent times, such technologies have been applied to predict votes and air-fares, sort relevant news (in aggregators), power search tools (like Google), and more. Even the array of social networking sites apply somewhat simplistic versions of these technologies to group, sort, and display their members.
I meet many business owners who are unaware (through apathy or ignorance) of the value of the data their firms generate. Even the most ‘mundane’ pieces of information can, with the right tools, reveal valuable patterns or clues about how to improve or repair a company, and open opportunities for new business models.
The future is going to be a very exciting place, and as interfaces become increasingly intuitive, our relationship with our systems and data will fundamentally change how we think. For computing, the result is phenomenal, summed up perfectly by the philosopher Isaiah Berlin, who said, “To understand is to perceive patterns.”
MIT Research Paper
Computational Cognitive Science Group (MIT)
Wired Magazine on “Sorting the world”:
Google’s MapReduce framework:
Kwabena Boahen’s page at Stanford: