Acknowledgements – Democratizing AI to Benefit Everyone
In follow-up to my introductory articles Democratizing AI to Benefit Everyone and AI Perspectives, the focus of this article is to acknowledge, recognize, and spotlight a selection of the many people and organizations that I have referenced in my recent book Democratizing Artificial Intelligence to Benefit Everyone. I have already highlighted the United Nations, World Economic Forum, Organization for Economic Co-operation and Development (OECD), and the AI for Good Global Summit in other articles (see United Nations & Democratizing AI to Benefit Everyone; World Economic Forum and Democratizing AI to Benefit Everyone; OECD and Democratizing AI to Benefit Everyone; AI for Good and Democratizing AI to Benefit Everyone). As mentioned in the book’s acknowledgments, I would also like to thank my many friends and business colleagues over the years, many of whom are scattered across the globe. They have all contributed to my life in various ways, for which I am very grateful and appreciative. As part of that group, a special thanks to my friends and colleagues from school and the University of Stellenbosch, CSense Systems, General Electric, Jumo, Bennit.AI, the Machine Intelligence Institute of Africa, Cortex Logic, and the Cortex Group. Much appreciation also to the special people I have interacted with, who inspired me and from whom I have learned so much within the international AI community, intellectual virtual communities, the African AI community, technology hubs, as well as the businesses and organizations that my companies have interacted with, and many more.
As mentioned in the book’s introduction, “although the aim of the book is to help with the drive towards democratizing AI and its applications to maximize the beneficial outcomes for humanity, a big part of the book has also been dedicated to this sense-making journey as a foundation for democratizing AI and to more accurately understand where we are heading given all the current dynamics on a global and national economic and political level as well as across ideologies, industries, and businesses. There is a lot of fantastic thought leadership, information, ideas, and research out there that we can tap into and benefit from if we can properly synthesize the material, make sense of it, be clear about what we want to achieve, plan properly, collaborate, and then execute. This book therefore also acts as a filter on those thoughts, information, ideas, and research to enable as many people as possible to not only interpret and make sense of this, but also participate in helping shape a better future for ourselves, our children, and humanity going forward. It also provides a snapshot of our current reality across the spectrum and the varied insightful opinions out there. Where relevant and in the spirit of decentralized knowledge sharing and sense-making, I also highlight or emphasize certain perspectives from some of the best-resourced research and consulting organizations and thought leaders, as well as ideas and thoughts from people who might not be well known in many circles but have important perspectives that need to be considered as part of synthesizing a more balanced view. To get a proper grip on and understanding of other people’s points of view, it is important to steel man their opinions instead of straw manning them. In this book I share many different perspectives on AI’s impact on society and its potential benefits, risks, concerns, challenges, progress, lessons learnt, limitations, future paths, and research priorities.
One example is making sense of the debates on AI’s future path and impact on humanity, which is like a roller-coaster ride of disparate ideas and thoughts from a wide spectrum of experts and people from all walks of life, driven by a combination of trepidation and enthusiasm about the monumental risks and opportunities that AI presents in the 21st century and beyond. I also share specific solutions for addressing AI’s potential negative impacts, designing AI for social good and beneficial outcomes, building human-compatible AI that is ethical and trustworthy, addressing bias and discrimination, and the skills and competencies needed for a human-centric AI-driven workplace.” In the book I also introduce a Massive Transformative Purpose for Humanity and its associated goals to help shape a beneficial human-centric future (which complements the United Nations’ 2030 vision and SDGs), along with Sapiens (sapiens.network) as a decentralized human-centric user-controlled AI-driven super platform that empowers individuals and monetizes their data and services, and which can be extended to companies, communities, cities, city-states, and beyond.
I would like to acknowledge Francois Chollet, an AI researcher and practitioner and the developer of the Keras deep learning library. I have also referenced some of his thoughtful contributions and insights in the chapters and sections dealing with risks, concerns, and challenges of AI for society; making sense of the AI debates; human intelligence versus machine intelligence; lessons learnt, limitations, and the current state-of-the-art in AI; progress, priorities, and likely future paths for AI; the meaning of life; and the importance of democratizing AI. As mentioned in the book, Francois also wrote another excellent blog post, What worries me about AI, “that absolutely resonated with me on multiple fronts, and which is in line with my concerns about AI and how we can address this by democratizing AI with practical solutions where AI helps us – all in line with the proposed MTP for Humanity and associated goals.”
A few years ago, I participated in a Future of Life Institute event with Max Tegmark (Physics professor and AI researcher at MIT and the President of the Future of Life Institute) where we specifically informed African leaders about the dangers and risks involved with lethal autonomous weapon systems. I have also cited some work by the Future of Life Institute and Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence in the chapters and sections dealing with AI’s impact on society; risks, concerns, and challenges of AI for society; making sense of the AI debates; human intelligence versus machine intelligence; priorities and likely future paths for AI; and various potential outcomes for the future of civilization. The following quote by Max Tegmark also resonates with me: “Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”
I would like to recognize Joscha Bach, Cognitive Scientist and VP of Research at AI Foundation. I have also shared some of his thoughtful contributions and insights in the chapters and sections that cover making sense of the AI debates; human intelligence versus machine intelligence; progress, priorities, and likely future paths for AI; and the meaning of life as we contemplate the beneficial outcomes for humanity in the Smart Technology Era. Joscha regards the classical AI of 1950 to 2013 as first order AI, the current phase with systems that learn as second order AI, and meta learning (learning about learning) as third order AI, and asks whether fourth order AI is about the general theory of search.
I would like to put a spotlight on Trent McConaghy, one of the co-founders of Ocean Protocol, whom I met at the AI for Good Global Summit in Geneva, Switzerland a few years ago, where he also talked about democratizing data and developing a decentralized substrate for AI data and services through BigchainDB (which powers the Interplanetary Database or IPDB network) and Ocean Protocol. Given Trent’s background as an AI practitioner before joining the blockchain world and combining AI and blockchain, he also has some interesting perspectives on Decentralized Autonomous Organizations (DAOs) and their AI versions, called AI DAOs, as well as the future of humanity in the face of AI and blockchain-like technologies. In the book I also reference Ocean Protocol and some of his thoughts in the chapter on beneficial outcomes for humanity in the Smart Technology Era as well as the last chapter, where I introduce Sapiens (sapiens.network) as a decentralized human-centric user-controlled AI-driven super platform that empowers individuals and monetizes their data and services and can be extended to companies, communities, cities, city-states, and beyond. As distributed ledger technologies have a key role to play in the decentralized AI-driven user-controlled super platform stack, I believe technology such as Ocean Protocol should be more widely adopted.
I have also cited Dorine van Norren’s work on a cross-cultural comparison of the South African philosophy of Ubuntu, the Buddhist Gross National Happiness of Bhutan, and the indigenous American idea of Buen Vivir from Ecuador. Dorine, with whom I also recently participated as a speaker in the Africa Knows Conference, outlines the perspectives of these three worldviews on the United Nations’ sustainable development goals (SDGs), and specifically “how they view ‘development’, ‘sustainability’, goals and indicators, the implicit value underpinnings of the SDGs, prioritization of goals, missing links, and leadership.” As mentioned in the book, she argues that “although the SDGs contain language of all three of these specific worldviews, it is evident that Western ‘modernism’ has a dominant influence, with individualism more represented and private sector responsibility lacking to a certain extent, as opposed to having sharing, collective agency, and the human-nature-wellbeing interrelationship better incorporated.” Dorine therefore recommends “a reinterpretation of the SDG framework and globalization in general by finding common ground between Western modernism, Ubuntu, Happiness, and Buen Vivir.”
Lex Fridman, an AI researcher at MIT and YouTube podcast host, has also acted as a wonderful portal to many researchers, businesspeople, and other intellectual thinkers over the past few years through his YouTube channel. I have referenced him in the chapters and sections that address the debates, progress, priorities, and likely future paths of AI; making sense of the AI debates; beneficial outcomes for humanity in the Smart Technology Era; and what does it mean to be human and living meaningful in the 21st century. Lex has a habit of asking people on his podcast about the meaning of life. In one of the chapters I have a section where I highlight the essence of a wide variety of meaningful responses to this question, paraphrased, which helps provide some further rich insights into how modern-day thoughtful people think about this. It is also interesting to see how these responses fit into Maslow’s 8-stage hierarchical motivation model as well as the schools of philosophy’s classification framework of supernatural, subjective, objective, and no meaning. This is all part of the foundation layer as we contemplate beneficial outcomes for humanity and how we can democratize AI to help shape a beneficial human-centric future.
There are many well-informed and thoroughly researched perspectives about the state of our civilization and our current trajectory. As mentioned in the book, I found the one presented by Daniel Schmachtenberger to be not only thoughtful and insightful, but also one that we should pay attention to. Daniel’s core interest is long-term civilization design and, more specifically, helping us as a civilization to develop improved sense-making and meaning-making capabilities so that we can make better-quality decisions to help unlock more of the potential and higher values that we are capable of. The book references some of his thoughts in the chapters and sections dealing with what does it mean to be human and living meaningful in the 21st century; the problematic trajectory of our current civilization; and beneficial outcomes for humanity in the Smart Technology Era. To help accelerate a cultural movement toward much-improved sense-making and conversation, Daniel Schmachtenberger and others have recently founded the Consilience Project, a non-profit media organization that aims to help repair and rebuild the health of the information commons “by helping educate people on how to improve their information processing so they can better detect media bias and disinformation while becoming more capable sense-makers and citizens”.
I also cited some of the excellent work done by Erik Brynjolfsson (Economics professor at Stanford University) and Andrew McAfee (Research scientist at MIT) via their books such as The Second Machine Age and Machine, Platform, Crowd: Harnessing Our Digital Future. Some of their thoughts and research provide further context to the Smart Technology Era, where they see the second machine age as the start of the digital and information revolution; AI’s impact on the workplace, employment, and the job market; the risks, concerns, and challenges of AI for society; and AI-driven platform businesses. As Democratizing Artificial Intelligence to Benefit Everyone provides a sense-making journey to help shape a beneficial human-centric future that benefits everyone, it also references Andrew McAfee and Erik Brynjolfsson in Machine, Platform, Crowd, which describes three rebalancing acts needed in the Smart Technology Era: human–AI collaboration, the balance between products and platforms, and in-house company know-how versus contributions and participation from communities and multitudes of people.
I would also like to briefly highlight some of AI researcher Stuart Russell’s work, which I have also referenced in my book. In the chapter on the debates, progress, and likely future paths of AI, there is a specific section that aims to make sense of the AI debates and also shares Stuart’s position and reasoning on the importance of AI safety, as communicated in his book Human Compatible: Artificial Intelligence and the Problem of Control. He refers to the AI debates as “The Not-So-Great AI Debate”, where he addresses so-called AI denialist arguments for not taking seriously the possibility that poorly designed superintelligent AI systems could present an existential risk to humanity. As the premise of this book and my Massive Transformative Purpose is maximizing the social benefit of AI, I identify with the Research Priorities for Robust and Beneficial Artificial Intelligence report by Stuart Russell, Daniel Dewey, and Max Tegmark as communicated via the Future of Life Institute website.
Recent Turing Award winners Geoff Hinton, Yoshua Bengio, and Yann LeCun, recognized for their work related to deep learning research, have also been referenced in the sections on making sense of the AI debates; lessons learnt, limitations, and the current state-of-the-art in AI; and progress, priorities, and likely future paths of AI. Whereas researchers such as Yoshua Bengio, Yann LeCun, and Geoff Hinton favor working with neural network type approaches that focus on learning methods for supervised and self-supervised learning that are not necessarily dependent on specific structures, researchers such as Gary Marcus and Oren Etzioni are working on bringing in hybrid structures that can deal with symbolic methods for logic and reasoning. Their perspectives have also been cited in the sections on making sense of the AI debates; lessons learnt and limitations of AI; as well as progress, priorities, and likely future paths of AI. As a psychology and neuroscience researcher at New York University, Gary Marcus does not think that developing human-level intelligence requires replicating exactly the way the human brain works. In a recent book called Rebooting AI: Building Artificial Intelligence We Can Trust, which Gary co-authored with Ernest Davis, they discuss the limitations of deep learning, which is good at perceptual pattern classification using bottom-up information but not at commonsense reasoning, for which symbol manipulation via mathematics or language processing might be a more suitable solution as part of a trustworthy AI system that has commonsense values and reasoning built in. Oren Etzioni, the CEO of the Allen Institute for AI, and his team are, among other projects, working on Project Mosaic, which focuses on building common sense into an AI system, one of the key features of human-level intelligence.
Demis Hassabis, AI researcher and CEO of DeepMind, has made it clear since the inception of DeepMind that their development of neuroscience-inspired AI systems is driven by the mission to solve intelligence and advance scientific discovery for all. I have referenced him in chapters and sections dealing with our responsibility in directing AI and making sense of the AI debates.
Elon Musk, serial entrepreneur and founder of companies such as Tesla, SpaceX, and Neuralink, has also been referenced in chapters and sections that cover AI-driven transportation, making sense of the AI debates and beneficial outcomes for humanity, and the proposed MTP for Humanity and associated MTP goals. The final two MTP goals are all about ensuring the best possible livable habitat here on Earth for humanity and other life forms, reducing our dependence on animal life for food and on unhealthy processed foods, being considerate of other living organisms, making life multi-planetary (in line with Elon Musk’s vision), extracting and making use of resources from beyond Earth, and exploring the universe through our advancing smart technology.
Kai-Fu Lee, AI investor, entrepreneur and author of AI Superpowers, has been cited in chapters and sections concerning AI’s transformative impact on our world; some brief history highlights of AI; the four waves harnessing AI in different ways which include Internet AI, Business AI, Perception AI, and Autonomous AI; AI’s impact on the workplace, employment, and the job market; transformative AI for personalized education; AI’s impact on society; risks, concerns and challenges of AI for society; and 21st century skills, competencies, and jobs for a human-centric AI-driven workplace.
The chapter dealing with the debates, progress, and likely future paths of AI also references some strong technological utopian proponents such as roboticist Hans Moravec, author of Mind Children: The Future of Robot and Human Intelligence, as well as Ray Kurzweil, who is currently Director of Engineering at Google and has written books on the technological singularity, futurism, and transhumanism such as The Age of Spiritual Machines and The Singularity is Near: When Humans Transcend Biology. Rodney Brooks, co-founder of iRobot and Rethink Robotics, thinks that the long-term timing for AI is being crudely underestimated, as stated by Amara’s law that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”. It is not surprising to see Ray Kurzweil and Rodney Brooks at opposite ends of the timeline prediction. As mentioned in the book: “Whereas Ray is a strong advocate of accelerating returns and believes that a hierarchical connectionist-based approach that incorporates adequate real-world knowledge and multi-chain reasoning in language understanding might be enough to achieve strong AI, Rodney thinks that not everything is exponential and that we need a lot more breakthroughs and new algorithms (in addition to back propagation used in Deep Learning) to approximate anything close to what biological systems are doing, especially given the fact that we cannot currently even replicate the learning capabilities, adaptability, or the mechanics of insects. Rodney reckons that some of the major obstacles to overcome include dexterity, experiential memory, understanding the world from a day-to-day perspective, and comprehending what goals are and what it means to make progress towards them. Ray’s opinion is that techno-sceptics are thinking linearly, suffering from engineer’s pessimism, and do not see exponential progress in software advances and cross-fertilization of ideas.”
Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute and popularizer of the friendly AI concept, has introduced Fun Theory which is the field of knowledge that helps us to imagine utopias. His work is also cited in the section about ideas for reshaping our civilization for beneficial outcomes.
Ray Dalio, a billionaire hedge fund manager, philanthropist, and co-founder of the world’s largest hedge fund, Bridgewater Associates, has also been referenced in the sections that address fixing capitalism; the meaning of life; ideas for reshaping our civilization for beneficial outcomes; and the MTP for Humanity and the MTP goals. As Ray Dalio says, “Truth – or, more precisely, an accurate understanding of reality – is the essential foundation for any good outcome”. As I have mentioned in the book’s introduction: “It is important to have dreams, but they need to be grounded in reality, molded by collective intelligence and wisdom, and converted into clear realistic goals and plans that can be relentlessly executed in an adaptive and agile fashion with passion and determination. That would lead us to success.”
Eric Posner and Glen Weyl are also referenced in the section that addresses analyzing issues and ideas for reshaping our civilization for beneficial outcomes. As mentioned in the book, they present market-based ideas in Radical Markets: Uprooting Capitalism and Democracy for a Just Society “that can help reshape the markets and society with greater equality and reciprocity and address the ‘crisis of the liberal order’, which entails the inequality within wealthy countries, the drop in economic and productivity growth rates which causes economic stagnation, the decline in employment, and the struggle of democracies to handle conflicts between minorities and majorities within countries.” Max Borders is also referenced, as he shows in The Social Singularity – A Decentralist Manifesto that humanity is already building systems and infrastructure that will transform and replace society’s current mediating structures and centers of power. In the same section Eric Weinstein and Peter Thiel are also cited, where they specifically describe the current era of stagnation as starting in the 1970s, except for the world of bits and Silicon Valley. As mentioned in the book, “Eric Weinstein also discusses a further complication of ideas being suppressed by protecting academic, media, economic, government, and other institutions from individuals or groups of people who might have valid and reasonable ideas that do not fit into the mainstream institutional narratives and might be highly disruptive to an institutional order. He refers to this as a Distributed Idea Suppression Complex (DISC), which consists of a decentralized and distributed collection of different emergent structures that not only suppresses ideas but has led to a lack of meaningful progress in some areas, significant income inequality, and social unrest. The democratization of AI, smart technology, science, and knowledge in general can help to address some of these problems.”
John Rawls, author of A Theory of Justice and proponent of egalitarian liberalism, has specifically been referenced in chapters and sections that cover AI’s impact on society; analyzing issues and ideas for reshaping our civilization for beneficial outcomes; and addressing bias and discrimination.
Yuval Harari, author of Sapiens, Homo Deus, and 21 Lessons for the 21st Century, has also been referenced in the chapters and sections dealing with the Smart Technology Era, as he discusses the twin revolutions in information technology and biotechnology within the context of the Scientific Revolution. Other references include those sections that cover some of AI’s challenges and rewards; our responsibility in directing AI; AI’s impact on the workplace, employment, and the job market; AI’s impact on society; analyzing issues and ideas for reshaping our civilization for beneficial outcomes; various potential outcomes for the future of humanity; and building human-compatible, ethical, trustworthy, and beneficial AI.
Richard Baldwin, author of The Globotics Upheaval, has coined the word “globotics” for these new combined forms of globalization and robotics, where tele-migrants and white-collar robots, driven by the same digital technologies, are coming for the same jobs at the same time. This globotics transformation, applied to the services sector, has an amazingly fast and unfair impact on societies, effectively disrupting the services sector in a significant way. The result is an upheaval, a so-called Globotics Upheaval, and a backlash for which we need a resolution. Richard has been referenced in the chapters and sections that discuss the Smart Technology Era; AI’s impact on the workplace, employment, and the job market; analyzing issues and ideas for reshaping our civilization for beneficial outcomes; and 21st century skills, competencies, and jobs for a human-centric AI-driven workplace.
Calum Chace, author of The Economic Singularity, provides further context to the Smart Technology Era by discussing this new transformation within the context of the Information Revolution, where the dramatic growth in the capability of AI leads first to an economic singularity and then possibly a technological singularity. He has also been cited in the chapters and sections that address some of AI’s challenges and rewards; AI’s impact on the workplace, employment, and the job market; AI-powered personalized precision healthcare; various potential outcomes for the future of civilization; building human-compatible, ethical, trustworthy, and beneficial AI; and addressing bias and discrimination.
Klaus Schwab, Founder of the World Economic Forum and author of The Fourth Industrial Revolution and Shaping the Fourth Industrial Revolution has been quoted in the chapters and sections dealing with the Smart Technology Era and the potential benefits of AI for society and social good. He also makes the following call that speaks further to the societal benefits of the Smart Technology Era: “The new technology age, if shaped in a responsive and responsible way, could catalyze a new cultural renaissance that will enable us to feel part of something much larger than ourselves – a true global civilization. The Fourth Industrial Revolution has the potential to robotize humanity, and thus compromise our traditional sources of meaning – work, community, family, identity. Or we can use the Fourth Industrial Revolution to lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure that the latter is what happens.”
Martin Ford, author of The Rise of the Robots and Architects of Intelligence, has been referenced in chapters and sections dealing with AI’s impact on the workplace, employment, and the job market; the risks, concerns, and challenges of AI for society; and making sense of the AI debates. Martin argues that AI-driven systems and solutions are on the brink of extensive automation of white-collar jobs. His interviews with twenty-three of the leading researchers, practitioners, and others involved in the AI field were also insightful and help to make sense of the AI debates.
John Brockman, editor and author of Possible Minds: Twenty-five Ways of Looking at AI, in which he and other authors debate the future of AI, also provided an excellent resource for the chapters and sections that cover making sense of the AI debates, human intelligence versus machine intelligence, and lessons learnt and limitations of AI. They also reference Norbert Wiener’s The Human Use of Human Beings from 1950, which appears to be as relevant as ever in 2020 and beyond, as he conveys his worry at that time about the uncontrolled commercial exploitation and other unexpected consequences of advanced technologies.
Some others specifically referenced in the sections that address making sense of the AI debates, current limitations of AI, as well as human intelligence versus machine intelligence include Stephen Wolfram, Scientist, CEO of Wolfram Research and author of A New Kind of Science; Frank Wilczek, a Professor of Physics at MIT, author of A Beautiful Question: Finding Nature’s Deep Design and recipient of the 2004 Nobel Prize in Physics; Lisa Feldman Barrett, a Professor of Psychology and Neuroscience at Northeastern University and author of books such as Seven and a Half Lessons About the Brain and How Emotions Are Made; Jeff Hawkins, the co-founder and CEO of Numenta; Rich Sutton, a Professor of Computer Science at the University of Alberta and Research Scientist at DeepMind; John Launchbury, a director at the Defense Advanced Research Projects Agency (DARPA); Jürgen Schmidhuber, AI Researcher and Scientific Director at the Swiss AI Lab IDSIA; Ben Goertzel, SingularityNET’s CEO and developer of the software behind a social, humanoid robot called Sophia; Andrew Ng, an Adjunct Professor of Computer Science at Stanford, previously chief scientist at Baidu, and a co-founder of the Google Brain project, Coursera (an online education company), Landing AI, and the venture capital firm AI Fund that builds AI start-ups; Daphne Koller, a Professor of Computer Science at Stanford, co-founder of Coursera with Andrew, and CEO and Founder of the biotech startup Insitro; Fei-Fei Li, a Professor of Computer Science at Stanford and Chief Scientist of Google Cloud; Jeff Dean, the Director for AI and head of Google Brain; Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and founder of the Future of Humanity Institute at Oxford University; Judea Pearl, a Professor of Computer Science and Statistics at the University of California and author of books such as Heuristics, Causality, and The Book of Why; Rana el Kaliouby, the CEO of Affectiva; Daniela Rus, the Director of MIT’s Computer Science and Artificial Intelligence Laboratory; Cynthia Breazeal, Director of the Personal Robotics Group at the MIT Media Lab and Founder of Jibo; Josh Tenenbaum, a Professor of Computational Cognitive Science at MIT; David Ferrucci, CEO and Chief Scientist at Elemental Cognition (in partnership with Bridgewater Associates) and previous head of IBM Watson; James Manyika, a senior partner at McKinsey & Company and chairman of the McKinsey Global Institute; OpenAI’s Sam Altman (CEO), Greg Brockman (CTO), and Ilya Sutskever (Chief Scientist); Seth Lloyd, a theoretical physicist at MIT; George Dyson, a historian of science and technology; Daniel C. Dennett, a Professor of Philosophy at Tufts University and author of a number of books such as Consciousness Explained; Jaan Tallinn, a computer programmer, co-developer of Skype, and investor; Steven Pinker, a Professor of Psychology at Harvard University and author of a number of books including Enlightenment Now: The Case for Reason, Science, Humanism, and Progress; David Deutsch, quantum physicist at Oxford University and author of The Fabric of Reality and The Beginning of Infinity; Tom Griffiths, a Professor of Information Technology, Consciousness, and Culture at Princeton University and co-author of Algorithms to Live By; Anca Dragan, an Assistant Professor of Electrical Engineering and Computer Science at UC Berkeley; Chris Anderson, CEO of 3DR, former editor-in-chief of Wired, and author of The Long Tail, Free, and Makers; David Kaiser, a Professor of the History of Science as well as of Physics at MIT; Neil Gershenfeld, a Physicist and Director of MIT’s Center for Bits and Atoms; W. Daniel Hillis, a Professor of Engineering and Medicine at USC and author of The Pattern on the Stone: The Simple Ideas that Make Computers Work; Venki Ramakrishnan, a Molecular Biology Scientist at Cambridge University, Nobel Prize winner in chemistry, and author of Gene Machine: The Race to Decipher the Secrets of the Ribosome; Alex Pentland, a Professor of Media Arts and Sciences at MIT and author of Social Physics; Hans Ulrich Obrist, Artistic Director of the Serpentine Gallery in London and author of Ways of Curating and Lives of the Artists, Lives of the Architects; Caroline A. Jones, a Professor of Art History at MIT and author of Eyesight Alone; Alison Gopnik, a Developmental Psychologist at UC Berkeley and author of books that include The Philosophical Baby; Peter Galison, a Science Historian and Professor at Harvard University and author of Einstein’s Clocks, Poincaré’s Maps: Empires of Time; George M. Church, a Professor of Genetics at Harvard Medical School and co-author of Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves; George Hotz, a programmer, hacker, and the founder of Comma.ai; Carlos Perez of Intuition Machine; and Piero Scaruffi, a freelance software consultant and writer who is even more of a techno-skeptic and author of Intelligence is not Artificial – Why the Singularity is not coming any time soon and other Meditations on the Post-Human Condition and the Future of Intelligence.
In the section What does it mean to be human and living meaningful in the 21st century, some interesting and thoughtful perspectives were provided by the following people not already mentioned above: Manolis Kellis, a professor at MIT and head of the MIT Computational Biology Group; Simon Sinek, author of books such as Start With Why, Leaders Eat Last, and The Infinite Game; Noam Chomsky, a renowned linguist, philosopher, cognitive scientist, historian, social critic, and political activist; David Chalmers, a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness; Andrew Huberman, a neuroscientist at Stanford University; Yaron Brook, an objectivist philosopher, podcaster, and author; Karl Friston, a renowned neuroscientist who also introduced the free energy principle; Ian Hutchinson, a nuclear engineer and plasma physicist at MIT; Scott Aaronson, a professor specializing in quantum computing at UT Austin; Matthew Johnson, a professor and psychedelics researcher at Johns Hopkins; Joe Rogan, a comedian, Ultimate Fighting Championship commentator, and the host of the Joe Rogan Experience; Sheldon Solomon, a social psychologist, philosopher, co-developer of Terror Management Theory, and co-author of The Worm at the Core; Dawn Song, a professor of computer science at UC Berkeley; Jack Dorsey, the CEO of Twitter and Square; David Silver, who leads the reinforcement learning research group at DeepMind; Dan Kokotov, a VP of Engineering at Rev.ai; Diana Walsh Pasulka, a professor of philosophy and religion at UNCW and author of American Cosmic: UFOs, Religion, and Technology; Russ Tedrake, a roboticist and professor at MIT and vice president of robotics research at TRI; Alex Filippenko, an astrophysicist and professor of astronomy at Berkeley; Dmitri Dolgov, the CTO of the autonomous vehicle company Waymo; Grant Sanderson, the creator of the 3Blue1Brown math education channel on YouTube; Sara Seager, a planetary scientist at MIT known for her work on the search for exoplanets; and Dileep George, a brain-inspired AI researcher and co-founder of Vicarious.
As mentioned in the introduction, this is not an exhaustive list. Many thanks also to many other people, companies and organizations for their inspiration, insights, and wisdom.
Let us together shape a better future in the Smart Technology Era!
Some video & audio links: https://www.linkedin.com/pulse/ai-perspectives-jacques-ludik