In just a few hours, a Senate subcommittee led by Senators Hawley and Blumenthal will hold a hearing on formulating an AI license, after which a closed-door meeting will occur, mostly between big tech leaders and Senator Schumer. The legislative branch’s mastery of the AI issue has been shaky thus far. In a potential conflict of interest given the tight-knit Silicon Valley ecosystem, Schumer has two children who work for Amazon and Meta.
Senators Hawley and Blumenthal have promised repeatedly to hear from a range of experts in their AI hearings, yet thus far they have omitted critical experts from participation, opting instead to hear from a repetitive cast of Silicon Valley figures who have engineered a stranglehold on the burgeoning AI industry. Interestingly, Blumenthal not long ago tweeted that he is “hoping President Biden & tech companies coming together will provide an initial template & jump start real Congressional action”—that is, he hopes to abdicate his own legislative powers and responsibilities not just to another branch of government, but to private interests. The proceedings thus far have offered a range of troubling displays—from conflicts of interest to the misdirection of public power on behalf of particular economic interests—but we would like to draw attention to a possibly much larger threat: while Blumenthal and Hawley make statements correctly identifying the threat that autogenerated misinformation poses to the voting process, they overlook how the economic interests creating these technologies threaten democracy itself.
It is essential to point out that it is premature for our politicians to be awed—or cowed—by AI. A brief look at its history reveals a predictable hype cycle that has perpetuated itself for decades. Yes, there have been incremental improvements in automation technologies, but true artificial general intelligence remains as elusive as ever. While exempting generative AI from Section 230 is undoubtedly clever, this remains a patch fix rather than a structural solution. It does little to address either the inordinate centralized economic power that has given these organizations the ability to release unrefined tools to the public or the radical beliefs closely associated with their leaders. Indeed, a licensing solution may simply play into their hands, enshrining their position into a legal monopoly, rewarding them both for releasing these tools and techniques without any safeguards and for capitalizing on and confusing existing public fears about true existential risks the country and the world now face.
The work done by the FTC under Lina Khan constitutes a well-informed interpretation of the current state of AI. Thus there is no need for Blumenthal to await some backroom deal led by President Biden or by those whom he should be policing. On the contrary, members of the executive branch have already demonstrated and published expertise on the matter. Hawley and Blumenthal will also soon be hosting the Vice Chair and President of Microsoft, a company that the FTC recently targeted over a major merger. Given that Ms. Khan has been the target of economic interests lobbying the Biden administration for her dismissal, should we expect to see either of the two senators ask the Microsoft bigwig whether his company is involved, or whether he too finds her “anti-American” for doing her job?
The AI hype cycle has a more insidious bent than the usual Silicon Valley FOMO cycle because it is driven by a fundamental unclarity about how the term “artificial intelligence” should be understood—an unclarity that has been weaponized by incumbent business interests and VCs. Indeed, the distinctions between artificial general intelligence, generative AI, domain-specific AI applications, and even chat-style user interface technology are now kept purposely vague by Silicon Valley interests. Part of their motivation, of course, is money: the “carrot on a stick” of truly automated judgment enabled by AGI lures the investment “donkey” to fund the more ho-hum task-specific automation possible with current techniques. But there’s more to it.
The current incumbents in AI, like OpenAI and Anthropic, are in an unusual and precarious position: their products (ChatGPT, DALL-E, Claude, etc.) are underwhelming when compared to AGI, and yet building them required what must be one of the largest thefts in the history of humankind. The enormous amount of data that went into training GPT-3, for example, represents countless hours of labor, conversation, insight, creative advance, and personal and public effort by a massive number of internet users. According to the OpenAI paper “Language Models are Few-Shot Learners” (coauthored by the now-leaders of Anthropic as well as self-identified longtermists), 60% of the tokens used in training GPT-3 came from CommonCrawl, a dataset which, as of 2023, contains 19 million Blogspot pages, 18 million Wordpress pages, 6 million Wikipedia pages, 800 thousand USA Today pages, 700 thousand NASA pages, and on and on.
This is all to say nothing of the impact on the huge swathes of people whom AI incumbents hint they actually seek to replace rather than assist. Indeed, Sam Altman’s mentor Paul Graham smugly asserted that replacing employees with AI will prove that employees do not significantly contribute to building companies and that founders “deserve to be rich.” It would undoubtedly be his ideal if only capital holders like himself and their handpicked young male “founders” were left to shape or, for that matter, even participate in the market.
Strange and insular ideologies are not unusual in Silicon Valley. MIT Technology Review pointed out in 2013, in the presciently titled “A Free Database of the Entire Web May Spawn the Next Google,” that CommonCrawl’s advisors include Peter Norvig and Joi Ito. Both men have particular ideologies to be aware of, since both had a hand in shaping ChatGPT’s training dataset. Norvig has written widely to valorize machine learning techniques, in part by controversially arguing that description and statistical approximation are equal in validity and importance to explanation and principle in scientific knowledge. Joi Ito, the disgraced former leader of the MIT Media Lab, hosted Jeffrey Epstein (or, as his staff called him, “He Who Must Not Be Named”) on MIT’s campus, poisoning academic pursuits by association with the financier-pedophile. That is to say, CommonCrawl itself should not be assumed to be a neutral endeavor; shared ideology and political motivations may be infused into many layers of any technology built from current methods.
Incumbent AI companies are now valued at billions of dollars. And yet, because the core techniques are not suited to creating an AGI, their products cannot be significantly improved beyond the spot fixes provided by the free training currently being extracted from beta users and from the impoverished workers who put the “human” in “reinforcement learning from human feedback.”
Indeed, a particularly clever part of the AI hype machine is the effort to build an almost religious mystique around it. The mystique centers on various statements about how “we don’t know how it works.” This is purposely misleading. It is true insofar as aspects of the mechanism are not documented and specified in the way a simpler computer program is. But it is not true in the sense that magic is happening—nothing within the black box of an LLM is there for any other reason than convenience. It is not incomprehensible, nor is it of a higher order of thought than what a human is capable of: there is no special “consciousness” or “rationality” at work.
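To make the point concrete: the step by which an LLM produces its next word is ordinary arithmetic, not oracle-craft. The network assigns a raw score (a “logit”) to every token in its vocabulary, a softmax converts those scores into probabilities, and a token is selected. The following is a minimal sketch using a hypothetical three-word vocabulary and made-up logits—it resembles no vendor’s actual code, and real models differ only in scale:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits a model might emit for "The sky is".
vocab = ["blue", "green", "loud"]
logits = [4.0, 1.0, -2.0]

probs = softmax(logits)
# Greedy decoding: pick the highest-probability token.
next_token = vocab[probs.index(max(probs))]
```

The opacity people gesture at lives entirely in the billions of learned weights that produce the logits; the decoding machinery itself is as inspectable as any spreadsheet.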
And yet we see OpenAI’s Altman, the self-described “stochastic parrot,” comparing himself to a modern-day private-sector version of Oppenheimer, implying ominously that OpenAI holds the keys to technology akin to nuclear weapons. He insists that his industry needs to be regulated to prevent the destruction of the human race while simultaneously accelerating its release into the wild. The inconsistency resolves once we see that, as high priest of this new superhuman oracle, he will need a seat at the table. Indeed, the more AI moguls understand themselves as oligarchs, the more they become preoccupied not with technical advancement but with the prevention of rolling heads, as seen in Anthropic co-founder and former OpenAI policy director Jack Clark’s grandiose tweet thread, in which he worries that “[p]eople don’t take guillotines seriously. Historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets, actually, for real, killed. People feel like this can’t happen anymore.”
Given the shaky foundations, how, then, can these investments be protected? It is simple: by seeking inappropriate protected monopolies, intermediation into governance, and intermediation into defense. Business and capital interests have spent the last decade helping to erode American society and democracy, and lies about AI offer the opportunity for a kill shot. Isn’t this just the best time to have a gerontocracy?
“An expert is someone who articulates the needs of those in power” is a quote often attributed to Henry Kissinger. Silicon Valley business interests are more than happy to fill that role, especially as they become more and more powerful themselves. They will tell each other, their employees, politicians, and the public whatever they want or need to hear to advance their own power, a tautological loop that locks out meaningful technical critique and promises “market growth” to appeal to political powers already too willing to cede authority to privatized economic interests.
Ex-Google CEO Eric Schmidt’s relationships with political figures Henry Kissinger and former president Barack Obama are a perfect manifestation of this power-hungry impulse. Indeed, the entirety of the tiresome current conversation about AI could all be a backroom deal to enrich and empower Eric Schmidt and his allies in the commercial and pseudo-nonprofit sectors.
(Multiple photos showing Obama and Schmidt were recently released. Also included are photos of Obama with Jeff Bezos; of the then-president putting on a Facebook-branded hoodie next to Mark Zuckerberg; and of the soon-to-be Hollywood producer meeting with his soon-to-be new master, Netflix co-founder Reed Hastings.)
Politico wrote that “Schmidt has long sought to influence federal science policy, dating back to his close ties to the Obama administration.” And from at least 2019 to the present, Kissinger has been working closely with Eric Schmidt to shape the narrative around AI and laud its potential, insisting that “ChatGPT Heralds an Intellectual Revolution.” Recall that the ancient Kissinger failed spectacularly to spot the obvious fake that was Theranos, costing his co-investors over 400 million dollars. This is not surprising, as he has no special qualifications to vet modern emerging technology; his expertise lies in statecraft. So what type of “revolution” can we realistically expect from large language models, and what is Schmidt and Kissinger’s shared interest?
At their heart, products like ChatGPT and Anthropic’s Claude are simply chat interfaces trained to speak fluent “public relations.” In essence, they replicate the most tedious parts of the internet brought about over the years by Google’s own SEO algorithms. (Meanwhile, Google search results “coincidentally” become increasingly poor and repetitive, and avoid directing users to Wikipedia or high-quality primary sources.) They speak with an authoritative and confident tone even though they frequently offer fabricated or flawed information, and they prevent people from easily examining the methodology and credibility of sources. The same critique applies analogously to DALL-E, Midjourney, Stability.ai, and others in the domain of visual media—Mad Libs-esque style mimickers and image generators devoid of artistic expression and, much of the time, visual coherence.
The result is a suite of technologies that take the “post-truth” era to new heights. We deal now not just with a glut of misinformation and “alternative facts,” but the potential for a standardized monolithic “data” provider that seeks to fill the “market need” of thinking and reasoning itself. Facilitated by its flattened and uniform tone, powerful interests could easily launder their agendas through such a mouthpiece—already, the public is being adjusted to the idea of an authority wielded by an imagined superhuman “rationality,” the wet dream of those wrapped up in various popular techno-utopian cults. This follows the standard “disruption” model of software-based intermediation into existing markets, this time between people and knowledge they themselves created, providing technocrats and their allies with more granular control of information flowing to the public and manipulation of public sentiment itself.
We see Schmidt and Kissinger buzzing around the Biden administration and the Department of Defense as well. Biden’s 2020 campaign, for example, used an analytics company backed by Schmidt. Then, we find that “Biden’s transition team has already stacked its agency review teams with more tech executives than tech critics,” and that Schmidt “has been making personnel recommendations for appointments to the Department of Defense,” working with “former deputy secretary of defense Robert Work” to brief “the Biden transition team on national security issues.”
Since then, Schmidt’s firm, Schmidt Futures, inappropriately funded dozens of scientific jobs in the Biden administration, and just a few days ago Schmidt sent a memo to the President and to Congress about establishing a “‘Defense Experimentation Unit’ to more deliberately enable deep and accelerated exploration of generative AI capabilities.” This week, he is to be part of Senator Schumer’s “AI Insight Forum,” from which of course the public and press will be barred. None of Schmidt’s more recent forays into AI, however, are mentioned in the attendee list—he is simply referred to as “Google - Former CEO.”
(It is also worth mentioning that the recently-departed CEO of Schmidt Futures, Eric Braverman, also funded “a number of organizations in the effective altruism community” according to Will MacAskill’s researcher and self-identified longtermist Pablo Stafforini. Before his tenure at the firm, Braverman directly managed Schmidt’s family office, and earlier in his career he was “a partner and co-founder of McKinsey & Company’s government practice,” a lucrative business endeavor with highly-questionable results. The McKinsey alum seems well on his way to applying the techniques he architected to transfer political power to economic interests in what may be a final and permanent way.)
Kissinger, meanwhile, still has broad influence, both through direct connections and in ideology. For example, Johns Hopkins’ Henry A. Kissinger Center for Global Affairs has been hosting the “America and the Future of World Order Project,” an “off-the-record Study Group” which is “[i]nspired by the world-renowned Harvard international seminar created by Dr. Kissinger in the 1950s and 1960s.” It is worth noting that, back in the mid-twentieth century, “Kissinger financed the program completely through grants from private foundations, and in 1967 several of those foundations appeared on the Times’ list of CIA conduits,” and that, while Kissinger denied the CIA connection, “the disclosure … of a 1953 Federal Bureau of Investigation (FBI) document shows that Kissinger consciously sought and directly worked with the FBI while at Harvard. His contacts with the FBI in 1953 imply a much earlier and more thorough understanding of FBI operations than Kissinger claimed in his defense against Morton Halperin’s charges of illegal FBI wiretapping.”
It’s funny that the CIA may have been funding Kissinger’s seminar held on domestic soil, as we find that William J. Burns, the current director of the CIA under the Biden administration (and friend to Epstein), was one of the members of the seminar’s revival. Also in attendance: Jake Sullivan, Biden’s National Security Advisor; Derek Chollet, Counselor of the State Department; Amanda Sloat of the National Security Council; Kelly Magsamen, Chief of Staff to the Secretary of Defense; and Kathleen Hicks, the Deputy Secretary of Defense.
Hicks herself has been in the news quite a bit lately. In an interview with Jon Stewart in April, Hicks—who is also a former Henry A. Kissinger Chair at the Center for Strategic and International Studies (CSIS)—laughed nervously through basic accountability questions about food insecurity in the military. Then, less than two weeks ago, she announced the Replicator Initiative in which “the Defense Department will field thousands of autonomous systems across multiple domains within the next 18 to 24 months,” focused on, as Hicks herself put it, “attritable, autonomous systems in all domains”—i.e. “platforms that are unmanned and built affordably” such as UAVs. Development of these technologies will rely on the “uplift and urgency of the commercial sector” to “galvanize progress in the too-slow shift of U.S. military innovation.” (Given Silicon Valley’s penchant for sci-fi references and its fascination with naughty AI, let us hope that this is not a Stargate: SG-1 reference).
Simply maintaining the budget to field more UAVs as a first version hardly seems to warrant a flashy new initiative, mere months after the department struggled to defend its budget. Where might Hicks and others take the mandate of putting fewer people in the line of fire via “autonomous systems” made by the “commercial sector”—especially given the uncanny timing, with Schmidt’s memo calling for a “Defense Experimentation Unit” arriving just days after the DoD’s “Replicator Initiative” announcement?
Hicks’ speech also refers to the “tactical edge”—a concept introduced by MITRE in 2007 which draws very explicitly from the service-oriented architecture approaches of modern internet infrastructure. There is no issue with MITRE’s work or the metaphor being used broadly in military strategy, but there is some concern about the possibility of the military continuing to draw on more disingenuous Silicon Valley concepts. The DoD has for some time been focused on a new project, “Joint All-Domain Command and Control” (or JADC2), which will “leverage Artificial Intelligence and Machine Learning to help accelerate the commander’s decision cycle. Automatic machine-to-machine transactions will extract, consolidate and process massive amounts of data and information directly from the sensing infrastructure.”
Regarding JADC2, Hicks stated last year that “I — neither the secretary nor I are satisfied with the — where we are in the department on advanced command and control,” that “what I’m really focused on right now is taking that — those, sort of, the good work that’s going on and scaling it to the enterprise level.” Furthermore, “that’s not going to look like a major hardware program … . This is really a software-centric enterprise problem, and our approach will look like that.”
Indeed, a new DoD office, the Chief Digital and Artificial Intelligence Office (CDAO), was established last year. Its leader? A former machine learning executive from Lyft and Dropbox! When a reporter asked why “it’s no longer a three star general that’s over it, it’s a CEO from a tech company that’s now leading all the AI efforts,” Hicks replied:
We had JAIC [Joint Artificial Intelligence Center], which was cooking along on AI, and we had DDS [Defense Digital Services] which was taking design thinking. And sort of as I responded to you before with regard to the, sort of, the innovation ecosystem more broadly, there was a developmental stage where that was the right approach. I think we are past that developmental stage and now we are at a stage where, if you’re going to make the big move on that decision advantage, on JADC2, you have to bring those pieces together.
It is no longer going to work to have the data and AI folks working separately, for example, even if they’re highly companionable. So what we did is look at best-in-class design on the outside, I asked for a few studies on that, received some feedback, talked to a lot of people, including the AI Commission Leadership, Eric Schmidt, for example, and others, Bob Work — just a lot of that, kind of, outreach to come to the point of recommending to the secretary that we have the CDAO outcome. Then it was about finding the right team, and I stress team.
What does all of this mean? In short: the DoD is building a new command-and-control system based on AI technologies, various tech teams are being consolidated under the new CDAO division, and Eric Schmidt and his partner Robert Work have been advising on key decisions in this process, including the appointment of a Silicon Valley insider to lead the division. An unsettling prospect, especially in light of Senator Tuberville’s borderline traitorous refusal to confirm key military leaders, including members of the Joint Chiefs of Staff, perhaps opening a “market gap” for AI-driven replacements.
Our defenses could be weakened, both by intermediation between the military and its own technological capabilities and advancements and/or by instilling an ideology of deference to the private innovation sector. In many ways, our defense leaders’ capitulation to these economic powers is already well underway—as Michael Madsen, acting director of the Pentagon’s Defense Innovation Unit, recently stated about the SVB collapse: there is an “opportunity to really get serious about growing that connective tissue between the national security enterprise and the commercial capital markets … and show that we’re good and sophisticated partners.”
It’s crucial to avoid a Trofim Lysenko-style mass casualty incident—one in which famous opportunistic individuals are given undue influence over public technology initiatives after selling government officials on hyped products and methods. Taking into account the numerous cozy relationships between political leaders and tech industry magnates—Senate Majority Leader Schumer, for example, single-handedly killed substantive antitrust legislation and momentum just last year—many, many pieces seem to be well in place for the eventual replacement of our troops by autonomous systems designed alongside or by private-industry radicals: an army with no loyalty or judgment, no ethical capabilities, and no ability to uphold oaths through acts of will or to be inspired by loyalty to country, ideals, or justice. The number of agents in power who are capable of making and upholding oaths to defend the Constitution could grow ever smaller.
There are potential threats on our soil as well. Will these systems, as other military materiel has, eventually make it into the hands of untrained police forces to be used against American citizens? And what can we make of the ideology of Hicks’ successor in the Kissinger Chair at the CSIS, Michael J. Green, who wrote an article shortly after Trump’s election hoping that “[n]ew leadership may emerge precisely because the liberal democracies have something fundamental their citizens will want them to defend,” and that this new epoch of leadership must deal with “domestic institutions [which] continue to disappoint populations struggling with growing inequality and the diminishing returns of the social welfare state”? Here, he positions “inequality” as a consequence of a “social welfare state,” glossing over exactly how that inequality occurred in the first place (for example, via wage stagnation), who caused it, and who benefits from it. He suggests, “[w]e now need leaders who can harness their citizens to defend and expand freedom and prosperity.” Note that the citizenry themselves are not to be defended or served. Rather, citizens are “harnessed” to “expand prosperity.” Quite the concerning stance from an institution producing defense secretaries.
Worried? We offer an idea of how Schmidt learned to stop worrying and love the bomb—and how you can, too!
Schmidt’s former lover, it turns out, referred to him under the codename “Dr. Strangelove,” a character whom many people believe is a parody of Kissinger, a man whom Schmidt later sought out and with whom he now works closely. Then, in a 2021 interview with Maureen Dowd about his work with Kissinger on AI, Schmidt’s recommendation on where to start with national security concerns regarding China was: “[t]he first thing for us to look at between the U.S. and China is to make sure there’s no ‘Dr. Strangelove’ scenario, a launch on a warning, to make sure there’s time for human decision making.” Sound enough advice in isolation, though frighteningly obvious given that nuclear war has already been avoided more than once by humans countermanding automated systems. The warning is especially surreal coming from two Dr. Strangeloves themselves.
We’ll leave it to the psychologists (and sexologists) to unravel whatever the hell is going on between the pair. In the meantime, let us hope, at least, that none of the defense officials they have helped install become concerned with “loss of essence” via “precious bodily fluids.”