I had begun to despair about the seemingly endless spate of nonsensical writing about the risks of AI and all-powerful yet allegedly inscrutable “algorithms”, and found myself wondering whether technology analysis could ever rescue itself from the overwhelming weight of its own stupidity. Sadly, so much writing about human and machine agency is anything but intelligent. I do not exaggerate when I say that its overwhelming lack of intelligence makes even a Roomba look superintelligent in comparison.
Jeremy Howard, CEO of Enlitic, is exploring these capabilities for medical applications. An early adopter of neural networks and big-data methods in the 1990s, he later witnessed the rise of an algorithmic method called “deep learning” as president and chief scientist of Kaggle, a platform for data-science competitions.
In a recent interview he explains that around 2012, deep neural networks started becoming good at things that previously only humans could do, particularly understanding the content of images. Image recognition may sound like a fairly niche application, but it is actually critical: computers used to be blind, and today they recognise objects in pictures faster and more accurately than humans do.
He explains that the difference between humans and machines is that once a machine learns a capability, not every other machine has to learn it again: we can download these networks between machines, but we cannot download knowledge between brains. This is a huge benefit of deep-learning systems, one he refers to as “transfer learning”. The only things holding back the growth of machine learning, he states, are 1) access to data and 2) the ability to do logic.
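The “downloading networks between machines” idea can be made concrete with a toy sketch. The code below is my own illustration, not from the interview: one hypothetical “machine A” trains a tiny perceptron on a toy problem, serialises the learned weights, and a second “machine B” loads them and immediately classifies correctly without any training of its own. Real systems share far larger networks (and transfer learning usually means reusing pretrained layers for a new task), but the mechanism of copying learned parameters is the same in spirit.

```python
import json

def predict(weights, bias, x):
    # Linear score: w · x + b
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Classic perceptron rule: nudge weights toward misclassified examples.
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if predict(weights, bias, x) > 0 else -1
            if pred != y:
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# "Machine A" learns a linearly separable toy problem (AND-like).
X = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
y = [1, -1, -1, -1]
weights, bias = train_perceptron(X, y)

# "Downloading" the network: serialise the learned parameters...
payload = json.dumps({"weights": weights, "bias": bias})

# ...and "machine B" deserialises them, inheriting the skill instantly,
# with no training loop of its own.
learned = json.loads(payload)
preds = [1 if predict(learned["weights"], learned["bias"], x) > 0 else -1
         for x in X]
print(preds)  # → [1, -1, -1, -1]
```

No human equivalent exists: we cannot serialise what one brain has learned and load it into another, which is exactly the asymmetry Howard is pointing at.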