Newsletter Subject

Smarter Than Human PhDs

From

brownstoneresearch.com

Email Address

feedback@e.brownstoneresearch.com

Sent On

Mon, Sep 16, 2024 09:15 PM

Email Preheader Text

Smarter Than Human PhDs By Jeff Brown, Editor, The Bleeding Edge

[The Bleeding Edge]

Smarter Than Human PhDs

By Jeff Brown, Editor, The Bleeding Edge

---------------------------------------------------------------

So often, the battle for power and control is built on a philosophical argument. And most often, the stance is positioned to appear as if it’s on the moral high ground.

This is especially true when one group has been making – or has made – remarkable progress. The ouster can’t build a case on failure to deliver to its constituents, so it falls back on an argument positioned as moral superiority. We need to “be safe,” “responsible,” and “take time to put the proper safeguards in place.” After all, who can argue with that? And the incumbents are positioned as “reckless,” “irresponsible,” “taking on too much risk,” and deemed a “threat.”

This was precisely the situation in the attempted ousting of Sam Altman of OpenAI back in November of last year. I wrote about the topic at the time in [Outer Limits – The Drivers Behind Sam Altman’s Ousting].

Corporate dramas like this happen all the time. It doesn’t matter if it’s a large, powerful public company or a high-growth startup – it’s human nature. But that’s not what made the ousting so interesting…

The Ouster

While Altman’s firing itself made for good headlines, the catalyst is what’s important. The ouster was Ilya Sutskever, one of the most well-known executives in the world of artificial intelligence (AI). His doctoral supervisor was Geoffrey Hinton, one of the three godfathers of AI.

The startup that Sutskever spun out of the University of Toronto with Hinton – DNN Research – was acquired by Google in 2013. While at Google, Sutskever worked on developing TensorFlow, which has become one of the most important open-source software libraries for machine learning (ML) and AI. (I wrote about the launch of TensorFlow right here in [The Bleeding Edge – Google Offers Open-Source Quantum Computing Library for Developers].) Sutskever also contributed to developing AlphaGo, the breakthrough deep-learning AI capable of easily beating the world’s best human Go player.

It wasn’t a surprise when Sutskever left Google to found OpenAI with Sam Altman in 2015. By 2019, the power duo was able to raise $1 billion in funding. And by early 2023, they had pulled in $10 billion thanks to the incredible progress they had made with just the first billion – specifically, releasing ChatGPT.

Sutskever, as an expert and an insider, saw the rapid pace of AI development at OpenAI. The pace of the advancement even took him by surprise. He was vocal about issues related to “AI safety” and concerned about the “risk to humanity” and the “potential harms caused by AI.” The rhetoric resulted in an open dispute between Altman and Sutskever… and a crisis for OpenAI.

This was the “moral high ground” veneer that Sutskever used to convince the board of OpenAI to remove Altman. Altman was positioned as greedily racing toward shorter-term commercialization at any cost and as uncommitted to the company mission of creating AI that “benefitted humanity” – a mission that would supposedly require slowing the pace to put some kind of arbitrary safety protocols in place.

It worked, but only for a few days.

Recommended Link

[Next AI Shock: November 19th?]

Thanks in part to Bill Gates and his Stargate project, we could have an AI market shock as soon as November 19th. Most people don't realize we're moments away from a seismic shift. If you want a chance to end up on the winning side of this shift...
[Click here now and I'll give you all the details.]

The Very Real Threat

When Altman’s ousting was made public, several high-profile executives at OpenAI also tendered their resignations. And then the majority of the company signed a letter indicating that they would resign as well. Had that been allowed to happen, OpenAI would have imploded…

Which is why it didn’t. The board promptly invited Altman back. Sutskever publicly “regretted” what he had done and then quietly took a back seat at OpenAI.

The problem was that Sutskever’s philosophical argument, as is often the case, wasn’t a real threat. It was theoretical.

How do we know this? How can we be certain Sutskever didn’t see something at OpenAI that should give us pause?

For one, his argument ignored the risk of not moving forward – of not accelerating. In other words, it ignored the reality that slowing down and putting artificial controls in place in the name of “keeping everyone safe” would increase the likelihood that a real adversary would overtake the Western development of artificial general intelligence (AGI).

Because let’s face it, just because Ilya Sutskever says we should slow down and take our time developing artificial intelligence, does that mean our adversaries will immediately agree to do the same?

Absolutely not. And that is a very real threat.

How is that real? Because we know how much China, among others, is already investing in and committed to developing artificial general intelligence. And they’ve been at it for years. Unlike the theoretical threats pushed by AI moral grandstanders, this is not theoretical in the slightest. It is a stated priority of China’s government to become the world’s AI superpower by 2030, just six years from now. ([The Bleeding Edge – Washington’s Radical Shift Toward Nuclear] is a must-read on this.)

Curiously, in the months that followed Altman’s ousting and subsequent return, Sutskever stepped down from OpenAI… And he started his own firm, Safe Superintelligence. It just raised $1 billion at a $5 billion post-money valuation for its seed round. How’s that for a company launch?

As the name implies, the goal is to develop “safe artificial intelligence” that surpasses human intelligence. There’s that veneer again… It suggests that the rest of the industry isn’t trying at all. Which is complete nonsense.

The Alphabet (Google), Meta, and Microsoft/OpenAI camps are all designing with their own versions of “safety” in mind… which equates to a heavily biased AI that censors certain information and even tries to rewrite history. Some view this as “safe,” while others view it as “dangerous.” Others, like Anthropic and xAI, are making concerted efforts to develop a far more neutral, fact-based AI, hopefully devoid of bias.

Early Signs of AGI

When Sutskever left OpenAI in May, it was the beginning of many rumors that there was a brain drain at OpenAI. “The best are leaving,” they said. “OpenAI’s days are numbered…” was another whisper.

How fickle. But we knew differently. There had already been talk of what OpenAI had been working on in the laboratory… It was showing signs of early AGI.

They called it Q* (pronounced “Q star”). And last month, we learned that this project was now known as Strawberry. It’s a precursor to what will eventually become known as Orion, which is believed to be the name of OpenAI’s next-generation, multi-modal large language model (LLM). We explored these developments in [The Bleeding Edge – AI’s Need For Speed].
The facts tell a very different story about what’s happened at OpenAI…

OpenAI was racing ahead, focused on improving the ability of its AI to reason – something that has been a challenge for LLMs. Smartly, Altman has been proactive in speaking with government officials and providing demonstrations of technology that had not yet been made public. It’s better to have an open dialogue with the government than to wake up one morning only to find that your company has been sequestered in the interests of “national security.”

Better yet, last Thursday, the world got a preview of what’s to come… OpenAI released a preview of its latest LLM, “o1.” And anyone who subscribes to ChatGPT can select it and experiment with a preview of the latest model.

The results are absolutely stunning.

[Chart: o1 and o1-preview (orange and pink) versus GPT-4o (teal) on competition math, competitive coding, and PhD-level science questions]

o1 is demonstrating a remarkable leap above GPT-4o. It’s such a large leap, it’s almost hard to believe. Performance on competition math, which has historically been difficult for LLMs, has skyrocketed. The same is true for competitive software coding.

And just look at the results of the GPQA analysis. GPQA is a fairly new benchmark for AI. It stands for the Graduate-Level Google-Proof Q&A benchmark. The test is a dataset of complex questions in biology, physics, and chemistry that require domain expertise to answer correctly and are hard to answer even with the help of a search engine like Google.

Highly skilled human non-experts are only able to achieve a score of 34%, even with the use of Google. GPT-4 was only able to achieve 39% accuracy, and GPT-4o only demonstrated 56% accuracy. But both o1-preview and o1 were able to achieve 78% accuracy, higher than an expert human. OpenAI’s latest model has now surpassed human PhD-level intelligence.

It appears that the detractors and the [decels (decelerationists)] were dead wrong, as they almost always are. The gaslighting about the threats and the safety risk – as well as OpenAI’s impending slide into irrelevance – was all nonsense. The company has just released something capable of incredible productivity and societal good.

This is a major leap in terms of intelligence and reasoning that will lead to more positive breakthroughs in more fields than we can name. And it will lead to a world of abundance.

The correct moral framework is to build and improve… To create technology that will become of immense value to society…

Technology that will lead to nuclear fusion and limitless clean, cheap electricity – capable of powering the planet’s growing power demands and bringing the last 700 million people out of poverty ([The Bleeding Edge – Should We Scale Back Our Energy Consumption?])

Technology that will unlock the secrets of human biology, so that we may reduce and eliminate human disease and suffering ([The Bleeding Edge – The Tech That Will Change the Economics of Biotech])

Technology that will help us discover hundreds of thousands of new synthetic materials, so that we may build stronger and longer-lasting infrastructure, reactors, and computing systems ([The Bleeding Edge – DeepMind’s Latest AI Breakthrough]).

It is right and just to accelerate. There are huge, complex problems to solve. And it won’t happen if the world panders to pontificators and fearmongers.

We must keep building.
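For readers who would rather compare the models programmatically than through the ChatGPT interface, the same side-by-side experiment can be run against the API. The snippet below is a minimal sketch, not an official benchmark harness: it assumes you hold an OpenAI API key with access to the "gpt-4o" and "o1-preview" model identifiers (availability and exact names vary by account and may have changed since this email was sent), and the question text is a placeholder standing in for a GPQA-style item rather than an actual entry from the dataset.

```python
# Minimal sketch: send the same hard question to GPT-4o and o1-preview
# and print both answers for comparison. Assumes OPENAI_API_KEY is set
# and that your account has access to both model identifiers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt; substitute a GPQA-style graduate-level science question.
QUESTION = (
    "A graduate-level chemistry question would go here, "
    "one that is hard to answer even with a search engine."
)

for model in ("gpt-4o", "o1-preview"):
    response = client.chat.completions.create(
        model=model,
        # o1-preview initially accepted only plain user messages (no system
        # prompt or temperature override), so the request is kept minimal.
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Running a handful of questions this way won't reproduce the published accuracy figures, but it does make the qualitative gap in reasoning between the two models easy to see for yourself.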
