If you have watched “The Theory of Everything”, you know it is a biographical film about Stephen Hawking and his quest to discover a formula that could explain every single phenomenon in the universe.
In the movie, Hawking believes that the universe works according to certain rules or formulas that humans do not yet know, due to the limits of observable phenomena, and that every known formula is just a small part that can be used to approximate “The Theory of Everything”. From my perspective, this belief describes how machine learning ideally works: we approximate the true formula (the theory of everything) that could explain every output for every input (the universe) by deriving a partial formula (a small part of it) from our dataset (the observable phenomena).
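The analogy above can be sketched as a toy example: a hidden rule plays the role of the “theory of everything”, a finite sample of its inputs and outputs plays the role of the observable dataset, and least-squares fitting recovers an approximate formula. This is a minimal illustration in Python with NumPy; the function `unknown_rule` and all numbers are invented for the sake of the sketch, not taken from any real model.

```python
import numpy as np

# The "universe": a hidden rule mapping inputs to outputs.
# In principle the learner never sees this function directly.
def unknown_rule(x):
    return 3.0 * x + 2.0

# "Observable phenomena": a finite dataset sampled from the rule.
x = np.linspace(-5.0, 5.0, 50)
y = unknown_rule(x)

# "Deriving a small part of the formula": fit a line by least squares.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the hidden rule's 3.0 and 2.0
```

With noise-free linear data the fit recovers the hidden rule almost exactly; with noisy or more complex data, the fitted formula is only an approximation, which is precisely the point of the analogy.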
Professor Stephen Hawking, widely regarded as one of the greatest physicists of our time, died peacefully on Wednesday, 14 March, his family told the media. He was 76. Although he never won the Nobel Prize, Hawking received most of the other awards in physics for which he was eligible, and his impact on the public imagination was even wider. When asked to name a scientist, non-scientists could usually come up with only a handful of names, and until his death Hawking was among them, alongside Einstein, Darwin and Newton.
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something that cannot be controlled. The four-paragraph letter is titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”.
Hawking was concerned that people could one day lose control of AI systems through the rise of superintelligences that do not act in accordance with human wishes, and that such powerful systems would threaten humanity. Are such dystopian outcomes possible? If so, how might these situations arise?
So what about 4.0? As Hawking said, “human intelligence is the ability to adapt to change”.