#ArtificialIntelligence is different from previous hype cycles in how it is likely to touch just about everything in the white-collar world eventually. There is much to be said, though, about how fast this happens for any given function or organization. In this video, I discuss differing cultures around #ROI and the difficulty of measuring #productivity, and share some factors I expect will shape the pace of adoption.
What does it cost? The small details of AI project complexity
"What does it really cost to build and run this?" is a very sensible question to ask yourself about your nascent #ArtificialIntelligence project involving #LargeLanguageModels (#LLM) or #GenerativeAI more broadly. More than ever, the answer to this question might depend hugely on the fine details of what you want to achieve. In this video, I talk about how the guardrails you put on your implementation (data privacy, vendor stability, and quality control) can quickly add complexity that will put pressure on your bottom line.
Your organization needs to change to leverage AI
If you have worked in a large IT organization, you have probably had an experience with "blockers" - you were in a position where you couldn't advance a project because you were waiting on someone else to do something. This is a hint at the big challenge of #ArtificialIntelligence from the #management perspective: tactically improving productivity here and there by buying this or that tool may not get you to a punchline faster, and really seeing those gains will require painful reforms in how the organization is run. In this video, I talk about some recurring conversations I have had with everyday people and build you a sort of management fantasy to provide some hints on what you will need to do to stay in the fast lane.
Analysis: The New York Times sues OpenAI
#GenerativeAI and #LargeLanguageModels are generating landmark, interesting lawsuits if nothing else. In this video, I discuss the recently announced #NewYorkTimes lawsuit targeting OpenAI and its incorporation of copyrighted material into #ChatGPT, along with:
- backstory on the nascent business of selling news for AI
- downstream risks for business and IT leaders interested in #LLM
- #Reddit as a problematic sleeping giant of #ArtificialIntelligence.
Ugly news on the 23andMe breach + how to protect yourself
There have been some ugly new revelations of late regarding the #DataBreach 23andMe announced Friday with the total number of customers compromised rising to almost 7 million. I cover recent news and basic facts of this breach with a special emphasis on consumer behavior and what to think about if you have been affected. #CyberSecurity #DataPrivacy
Anthropic is a rather unusual public benefit corporation. Is this good?
This next chapter in my series on unusual #CorporateGovernance in #ArtificialIntelligence concerns #GenerativeAI and #LargeLanguageModel vendor Anthropic and its unusual public benefit corporation (aka #BCorp) structure. I give a bit of context on the "why" of the B-corp, discuss its limited history, and consider how it might increase vulnerability to some standard corporate governance problems. It is worth emphasizing that it is a new thing, so there is not really much history to underwrite these objections, yet novelty per se is not a great quality in corporate governance.
You might consider these issues relative to the tumult at OpenAI, which was certainly fueled in part by that other company's own unusual (if different) governance structure.
What the medieval 100 Years War can tell us about OpenAI
What can the medieval 100 Years War between #France and #England tell us about OpenAI's boardroom drama + dramatic (un-)firing of #SamAltman? I peer into the distant past in hopes of giving you a useful brainstorm about the impending future.
The puny shark attacks of Jaws and AI doomsaying
What can the impact of the classic shark attack film Jaws tell us about #ArtificialIntelligence doomsaying?
I envy you the sturdy rock you slumbered under if you did not hear about OpenAI's recent surprise firing of CEO #SamAltman. It seems that at this point no one on the outside has quite the confidence they'd like about the rationale, yet all accounts make it clear #AISafety concerns were an important part of the story. It strikes me how eager people seem to participate in something they believe is so likely to produce an extinction-level event, and how little these dire warnings seem to affect their behavior. Everyone, it seems, would like a job ending humanity.
This video contains what you might call a cultural analysis of doomsaying broadly and I hope to provide some gentle suggestions about why we think about the end of the world much more than we seem to take it seriously in practice.
A prosaic idea for AI application with poetic gravity
#ArtificialIntelligence is a term that covers a lot of ground and not everyone agrees what it means. To find success in a corporate, private-sector setting you will need more specific ideas, yet you may still see a representative of your company on Bloomberg suggesting #AI is the reason your stock price should go up. Being greatly blessed with vision, I have an idea to share with you that might help you solve this problem. Better yet, I will articulate a familiar problem, a realistic yet sexy AI solution, and even a narrative as to why it could all spiral into lucrative megacorporate domination down the road.
We are often in positions where we have no choice but to produce lots of documentation about how to run the corporation, yet these efforts often undermine themselves when the mountain of documents is so large no one has time to sift through it. #RetrievalAugmentedGeneration (#RAG) is an approach to making these mountains more accessible to everyday people that involves sexy cutting-edge techniques like #LargeLanguageModels and #GenerativeAI more broadly, yet is also achievable and helps contain various risks and rough edges. Successful use of a technique like this leads to something like AI-as-management-enhancement that could provide a huge competitive advantage in successfully administering a larger and more diverse conglomerate - you might call it a #rhizocorp.
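To make the mountain-of-documents idea concrete, here is a minimal, hypothetical sketch of the very first step in a RAG pipeline: splitting long internal documents into overlapping passages small enough to retrieve and hand to a model. The chunk sizes and the character-based split are illustrative assumptions; real pipelines often split on sentences or headings.

```python
def chunk_document(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split a long document into overlapping character-based chunks.

    The overlap keeps sentences that straddle a boundary visible in
    both neighboring chunks, so a retriever does not lose them.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: a 1200-character "policy manual" becomes a few retrievable
# passages instead of one unreadable wall of text.
manual = "All expense reports must be filed within 30 days. " * 24
passages = chunk_document(manual)
```

Nothing here is exotic, which is part of the appeal: the achievable bits of RAG are mostly ordinary text plumbing.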
I should note I got the term rhizocorp from Matthew Gladden and I have not been able to discern another source from which he might have taken it. Credit is his pending further information.
What a weekend at OpenAI...
If you are behind on current events, let me catch you up on OpenAI's surprise firing of #SamAltman and the wild boardroom intrigue over the weekend. I will do my best to summarize events, analyze some of the players and their angles, and finally break down how these might matter to you. Short story: there was a clumsy midnight coup fueled by a known bad corporate governance situation, everything spiraled out of control, and lots of people aren't going to enjoy their holiday.
- OpenAI is hemorrhaging talent, has a board with no credibility with anyone, and will be lucky to avoid heavy-duty shareholder lawsuits. They could spiral into nothing.
- OpenAI's investors, outside of Microsoft, got screwed over to a semi-illegal degree and don't have great options. One not-great option they are talking about at this very second is shareholder lawsuits.
- Sam Altman is going to Microsoft. He had lots of options and he liked this one. Other people will go with him.
- Microsoft looked like the big loser last night yet they emerge today as the big winner. Altman puts them in a great position to build entirely within Microsoft and become the unquestioned leader in commercializing #LargeLanguageModels.
- If you are at a large corporation that uses #Azure, and you were comfortable using #LLM inside Azure, this is a positive turn of events for you in the short to medium term. You are buying it from a Microsoft that will now likely have even better luck making it for you in house.
Don't LLM so hard you forget about vector embeddings
If it is news to you, I would like 10 minutes of your time to tell you a bit about #VectorEmbeddings and #VectorDatabases as part of my weekly theme exploring #RetrievalAugmentedGeneration or #RAG (please forgive some past errors where "aided" was my "A" word). The #LargeLanguageModel component will handle the generation, but #LLM is not a great tool for the retrieval. For some problems, vector embeddings are the answer.
I think it is a little dangerous to set out in pursuit of #ArtificialIntelligence. It is much better to have specific goals, and RAG is a technique that fits very well onto some very common megacorporate problems. Specific goals are good because they help spotlight specific tasks that add up to success. Shopping for a vector database is one such task that might turn out to be very relevant to you but you will probably not see in the headlines like LLM.
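For the curious, the nearest-neighbor search that vector databases perform is simple enough to sketch in a few lines. The four-dimensional vectors below are toy stand-ins for what a real embedding model would produce (typically hundreds or thousands of dimensions), and the corpus names are invented; a vector database does this same ranking, just at scale and with clever indexing.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy corpus: in practice each vector comes from an embedding model.
corpus = {
    "expense policy":  np.array([0.9, 0.1, 0.0, 0.1]),
    "vacation policy": np.array([0.8, 0.2, 0.1, 0.0]),
    "server runbook":  np.array([0.0, 0.1, 0.9, 0.3]),
}

def retrieve(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k document names most similar to the query vector."""
    ranked = sorted(corpus,
                    key=lambda name: cosine_similarity(query_vec, corpus[name]),
                    reverse=True)
    return ranked[:k]

# A query vector near the "policy" cluster retrieves the policy documents,
# not the runbook.
top_docs = retrieve(np.array([0.85, 0.15, 0.05, 0.05]))
```

The LLM never sees this step; retrieval is classical similarity search, which is exactly why it is controllable in ways generation is not.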
Intro to retrieval aided generation (RAG)
The appearance of a new name is an interesting signal and I am here to introduce you to the term #RetrievalAidedGeneration (#RAG).
This is notably something I have discussed before without a name - you might remember either a widely circulated story on a Morgan Stanley internal #LargeLanguageModel application or my video on it - and I might say the appearance of a new name is a harbinger of the appearance of a consensus best practice. RAG gives you functionality as if you were using #LLM for expert knowledge, but it also gives you control over the hallucination problem and the provenance of information in general. It is a natural fit onto the basically universal big business problem that documentation of how to run the corporation quickly accumulates in volume beyond what readers can usefully sift through.
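As a sketch of how RAG delivers that provenance control: the retrieved passages, each tagged with its source, are pasted into the prompt ahead of the question, so the model is asked to synthesize from quoted material rather than recall from its weights. The function and field names here are hypothetical, not any vendor's API.

```python
def build_rag_prompt(question: str, retrieved: list[dict]) -> str:
    """Assemble a prompt that grounds the model in retrieved passages.

    Each passage keeps its source label, so the answer can cite where
    a claim came from - the provenance control RAG is prized for.
    """
    context = "\n\n".join(
        f"[Source: {doc['source']}]\n{doc['text']}" for doc in retrieved
    )
    return (
        "Answer the question using ONLY the passages below, and cite "
        "the source of each claim. If the passages are insufficient, "
        "say so instead of guessing.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "How quickly must expense reports be filed?",
    [{"source": "finance-handbook.pdf",
      "text": "Expense reports are due within 30 days."}],
)
```

The generation step stays an #LLM call, but what it generates from is now a short, auditable list of passages rather than the whole of its training data.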
Teaser for later this week: one can almost imagine these technologies opening new frontiers in how a large and decentralized conglomerate could be administered as a sort of #ArtificialIntelligence powered #RhizoCorp.
Analysis: AI and the quest to duct-tape our toys together
#OpenAI recently added some interesting new features and this video is about deeply reading these tea leaves for useful information about your own internal AI initiatives as well as the evolution of the broader marketplace for these tools. Specifically, they opened beta testing on a new #ChatGPT feature for uploading and ingesting .pdf files and announced a roadmap for broadening the input and output options in general. In this video, I pursue three lines of analysis...
1) #ArtificialIntelligence is going to need a lot of help with duct-taping itself into other things before it can take your job. The success or failure of these (otherwise very boring) duct-taping operations is what you need to watch.
2) #GenerativeAI is about the only area in which startups have been able to raise money recently, and many of them were working on similar add-on features. They may be in trouble, and some may have difficulty finding niches that stay clear of this kind of trouble.
3) We already have an aging tradition of stock dialog about how to handle AI inside your corporate organization and I think recent progress has rendered some of it awkwardly half out-of-date. There is a promise that AI can now help resolve data quality issues that previously held you back from doing AI.
Let's turn sorrow to joy. Send me your complaints!
Are things not as they should be? Feel like no one listens? I want to listen!
It's at best a lost opportunity how much we trumpet our real or fabricated success while concealing our stumbles and failures. A story about a way things can go wrong is actually a pretty useful thing, and we're probably destroying each other's mental health endlessly shouting at one another about our flawless success. That being said, I know you all have lots of good reasons you can't complain too hard about your work on social media.
So... complain to me! I won't tell a soul unless I hear similar complaints repeatedly, and if I do I will make a video about the pattern I see and not anyone's individual story. We can share useful anti-knowledge, give some hints at our real selves (at least in large, consistent groups), and you won't have to look like a malcontent in front of your boss.
Do you like this idea? Please send me a message or an email. I want to hear from you!
Vague BS is the biggest danger your AI program faces
I've got the envelope here and the academy says the biggest danger to your AI program is [drumroll]... vague bullshit!
What is AI? That might actually be a profound question, and perhaps it is like the House speaker race in that the difficulty is certainly not lack of candidates but lack of consensus and clarity. If you are blessed with success, it is going to be fueled by specificity on business value, infrastructure, algorithms, talent [coach’s clipboard folds out and the list drops to the floor]...
Emerging technology in the private sector tends to suffer from a code-switching problem - the language you need to excite and persuade is deeply unlike the language you need to build. If you don't see that both codes are there, and don't know how to speak in each, you may be in trouble.
In this video, I elaborate on these ideas and give a couple examples. One is about the old tension regarding what is fancy enough to call AI. The other is new: some of our older commentary on AI is very appropriate to #GenerativeAI in some applications and very inappropriate in others.
LLM = Large Libel Model?
Does #LLM stand for #LargeLanguageModel or Large Libel Model? In any event, you should take note that law bloggers are telling these jokes.
I discuss a specific lawsuit in #Georgia, Walters vs. #OpenAI, to provide an example of a #ChatGPT hallucination triggering a credible lawsuit. The specific dynamics of this particular lawsuit also provide us an opportunity to drill down on why it matters that #OpenAI is quite unusually incorporated as a for-profit subsidiary of a not-for-profit corporation.
Sanity on AI in the Hollywood labor negotiations
Yesterday there were some pleasing signs of sanity on artificial intelligence in the Hollywood labor negotiations. Specifically, in the end it is probably unavoidable for respectable large businesses that...
1. Products made using AI must be "labeled" as such.
2. Some specific human is on the record as responsible for the AI's output.
AWS invests in Anthropic vs. Microsoft invests in OpenAI
I cover the basic facts of Amazon/AWS's recently announced $1.25 billion to $4 billion investment in generative AI firm Anthropic, creator of the well-known large language model Claude. More fun: I point out that this is kind of a weird deal that not only mirrors the weird deal between Microsoft and OpenAI but has involved many of the same people over time.
Morgan Stanley seems to have the right idea on generative AI
There is a lot of media circulating about a new internal artificial intelligence tool at Morgan Stanley and I've really liked many of the things I have read. Specifically, what I could glean about the architecture fits well into some of my ongoing narratives about how you maybe don't want to directly use your large language model as an expert, but rather you should have it read information from trusted sources and write a synthesis for you to read in turn. In this video, I go into some detail about why this is the case.
Evaluating the hype on new large language models
Here I discuss new or upcoming large language models including Technology Innovation Institute's latest Falcon model, Meta's (f.k.a. Facebook's) upcoming iteration on its influential LLaMA models, and Alphabet Inc.'s (f.k.a. Google's) highly but vaguely anticipated new model Gemini. My real aim is to discuss how you might evaluate these different models, how the outcomes track the goals of their makers, the potential value in embracing open source models, and what sort of marketing language prevails in this area when test drives are not available.
In the heat of the moment, I also accidentally provide some arguments about naming and renaming things...