It is not terribly hard to see that generative AI is likely to put a lot of pressure on the music business. Music, as a business and cultural institution, is a great case study in looking backwards: it has been disrupted again and again by new technology, with consequences that are now an accepted part of your everyday life.
Is AI smarter than people? It certainly is when you can't hold people's attention
Are recent, state-of-the-art artificial intelligence algorithms smarter than people? I would say AI is still pretty shabby relative to motivated humans who are paying attention, yet motivating humans to pay attention is a hard problem in itself, one we all spend a lot of time discussing in frustration.
Intellectual property is a freakish modern invention
It's not hard to see that artificial intelligence is going to put a lot of pressure on our existing system of intellectual property law but easy to forget that that system is hardly eternal. I weave together diverse strands like Homer, James Joyce, cryptocurrency, Immanuel Kant, and the human author as a large language model to make a point about how the status quo we are afraid to leave is actually a pretty fleeting and weird moment in history.
The AI unemployment apocalypse that isn't
...a silly case study about shoes explaining why the effect of artificial intelligence on labor and unemployment won't be too bad in the long run...
Breaking down ChatGPT Enterprise
Let me give you the high-level story on OpenAI’s new enterprise tier for ChatGPT, including...
...the data privacy and cybersecurity concerns they are trying to assuage...
...the legal concerns that are mostly still at large...
...some very nice although not radically different new features...
...and the landscape of competing offerings like the recent Microsoft Azure offering and the zoo of open source models like Llama 2.
Don't let ChatGPT trade cryptocurrency for you!
I'd like to explain why letting ChatGPT trade cryptocurrency for you is a very terrible idea, especially because it provides a great prototype for recognizing other terrible ideas...
I was just making a Capnion TikTok account, so you can watch there if it is your preferred platform, and while the advice I observed was terrible, it provided great ideas for videos. In this particular case, the story is that the cryptocurrency universe is full of propaganda for pump-and-dump scams, and all ChatGPT can realistically be doing is uncritically passing through bad information it scraped off the internet. You are much better off closer to the source because you can take a guess at what manner of game the source may be playing.
Porn is the elephant in the room of the AI regulation conversation
Porn is the elephant in the room of the "social consequences and regulation of generative AI" conversation. I feel a little awkward bringing it up myself, but I think it needs doing. There will be huge economic incentive to turn these new tools to these purposes, and then incentive to always go a little farther down one unspeakable road or another. More than any other area, regulatory inaction here appears to me as unacceptable, period, yet this is an area polite conversation likes to avoid.
Why Etsy and artisanal mayonnaise are hints AI will not end the world
If you are anxious about the artificial intelligence apocalypse this meditation on Etsy and artisanal mayonnaise might help you see some silver lining. In many ways, you already live in someone else's dystopia and carry their fond memories of another time... This is to say that the world changes and change can be painful but the world is not going to end.
Zoom and the risk in using sensitive data to train AI
I had to weigh in on the Zoom fracas... If you didn't hear, Zoom updated their policies with a clause permitting them to use your data to train artificial intelligence models. In this video, I summarize the basic facts and give an analysis of what sort of marginal data privacy and cybersecurity risk you pick up when a company like Zoom uses your data for these purposes.
Amazon travel guides and generative AI supply gluts
I've been trying to warn you about generative-AI-driven supply glut issues for a while now, and how you are going to see more of them. The example du jour is the widely discussed plague of inferior travel guides on Amazon, their legions of bogus positive reviews, and the likely possibility that the sudden volume is only possible because they are written with the aid of artificial intelligence.
Do angels have faces? A prompt engineering pitfall
Do angels have faces? It depends on whom you ask, and this is a good window into some practical challenges in prompt engineering ...
In this video, I walk through an example involving OpenAI's text-to-image generative AI model Dall-E 2 that went badly. I am guessing that I got a blend here of the modern folkloric angel, which has a (usually attractive) human face, and the Old Testament angels that more resemble gyroscopes. The point here is that there can be subtlety in surprising places, like whether you would like your angel to have a human face or not, and getting these things right can require knowledge from weird corners.
The 1983 video game crash and potential AI supply gluts
In this video, I discuss the video game crash of 1983 and explain why this sort of supply glut crisis is something you should watch for as artificial intelligence adoption progresses. We much more commonly talk about consequences for the labor market and unemployment, but there is also potential for chaos stemming from the sudden appearance of lots of low-quality goods, and some evidence this is already happening in areas like fiction writing.
This crash spiraled into the collapse of Atari, set the stage for Japanese brand Nintendo to become a fixture of American culture, and inspired business practices around licensing and marketing of video games that structure the industry to this day.
Strategic ramifications of the Meta / Microsoft and LLaMA2 / Azure Collab
Meta has announced they will be partnering with Microsoft to offer their new open source large language model LLaMA2 inside the Azure cloud service. As I have discussed in other videos, open source models are probably the viable lane for companies that feel the need to own and control their own LLM and the first LLaMA attracted a lot of attention and has produced many descendant models. It may be true that inclusion of LLaMA2 inside Azure solves a lot of problems for these organizations and I think this story may be an interesting signpost in the broader corporate artificial intelligence adoption story.
There were a lot of startups looking to solve these problems and they may have a problem themselves now... This partnership also resembles a common story in past antitrust criticism of Big Tech and could be important for these reasons if the Department of Justice grows teeth at some point in the future.
DeSantis Deepfakes Trump! Again...
The DeSantis campaign has once again released a campaign advertisement containing a deepfake of GOP primary front-runner Trump, in this case featuring an audio clip (manufactured using artificial intelligence and NOT a real recording) of a Trump-like voice discussing Iowa governor Kim Reynolds.
Lina Khan's FTC probes ChatGPT maker OpenAI
Lina Khan’s FTC is opening a probe into OpenAI looking to answer questions about data privacy practices, potential cybersecurity issues (there has been a data breach of ChatGPT user information already), and most notably whether their large language models could be damaging individuals' reputations by "hallucinating" false negative statements.
Some basic conversation about LLM performance metrics
Lately, open source large language models proliferate like the bunny rabbits in my yard, and worse than that is the specter that you might have to find a good reason to pick one over another.
In this video, I recommend HuggingFace's LLM leaderboard as a place to get started. It ranks these models according to a number of metrics, with the widely-discussed Falcon presently leading in average score. I will drill into these metrics a bit... Just what do they measure and how? How might relative performance be different in the applications we are pursuing at my company? One point I would like to keep re-emphasizing is that the breadth and flexibility that might have attracted you to OpenAI's ChatGPT will not only be one of the more difficult qualities to reproduce in house, but is also likely to be one of the more difficult qualities to understand via a standardized test.
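To make the "average score" idea concrete, here is a minimal sketch of how a leaderboard-style ranking works. The model names and scores below are made-up placeholders, not real leaderboard numbers; the four benchmark names mirror the ones the HuggingFace leaderboard reported around this time (ARC, HellaSwag, MMLU, TruthfulQA).

```python
# Hypothetical benchmark scores for two made-up models.
scores = {
    "model-a": {"arc": 0.62, "hellaswag": 0.85, "mmlu": 0.55, "truthfulqa": 0.42},
    "model-b": {"arc": 0.58, "hellaswag": 0.80, "mmlu": 0.60, "truthfulqa": 0.50},
}

def average_score(benchmarks: dict) -> float:
    """Unweighted mean across benchmarks, leaderboard-style."""
    return sum(benchmarks.values()) / len(benchmarks)

# Rank models by average score, best first.
ranked = sorted(scores, key=lambda m: average_score(scores[m]), reverse=True)
for model in ranked:
    print(f"{model}: {average_score(scores[model]):.3f}")
```

Note that in this toy data model-b wins the average despite model-a leading on two of the four benchmarks, which is exactly why an unweighted average may not reflect performance in your particular application.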
Want your own LLM? Some thoughts on the ins and outs of open source options
Does $200k and 9.5 days to train a model sound cheap and fast to you? This is a serious question, and you should also consider the odds that your first shot hits the desired target.
In this video, I begin my promised discussion of open source large language models. If you are an organization that wants to build its own LLM, refining an open source model looks like the right path to me at the moment. However, I see a lot of danger that it could be more expensive than some people think, perform more poorly, and leave you with some of the problems that scared you away from a hosted model like OpenAI's celebrated ChatGPT. If nothing else, you will have a lot of decisions to make, and giving you context for those decisions is what I hope to do here.
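The "$200k and 9.5 days" question above is worth running as arithmetic. This back-of-envelope sketch uses those two figures from the text; the GPU hourly rate and the per-run success probability are illustrative assumptions, not quotes from any provider.

```python
# Back-of-envelope cost of one training run, using the $200k / 9.5 days
# figures from the text. The $2/GPU-hour rate is an assumed cloud price.
hours = 9.5 * 24          # wall-clock training time in hours
budget = 200_000          # dollars for one training run
cost_per_gpu_hour = 2.0   # assumed rate, dollars
gpus = budget / (hours * cost_per_gpu_hour)
print(f"Implied cluster size: ~{gpus:.0f} GPUs")

# The sting: one run rarely lands on target. If each attempt succeeds
# independently with probability p, the expected number of runs is 1/p,
# and expected spend scales accordingly. p here is an assumption.
p_success = 0.25
expected_runs = 1 / p_success
print(f"Expected total spend: ~${budget * expected_runs:,.0f}")
```

Under these assumptions the headline price quadruples once you account for misses, which is the point of asking about "the odds that your first shot hits the desired target."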
If you are not careful, an LLM will mention exactly what you tell it not to
If you say "Don't mention topic X" a human adult will usually oblige you but a toddler or a large language model will often turn around and mention it immediately.
It is surprisingly tough to keep a generative AI tool like ChatGPT from doing a particular thing, and often a ham-handed attempt will actually elicit the forbidden action. In this video, I give an example prompt that demonstrates this unfortunate property and hopefully also manage to give you a little prompt engineering intuition about why things work out this way.
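The two prompt styles at issue can be sketched as plain strings. This is a toy illustration of the usual fix, not a guaranteed recipe: rather than saying "don't mention X" (which puts X squarely in the model's context), steer toward allowed content without naming the exclusion at all. The prompt wording here is invented for the example.

```python
def negative_prompt(topic: str) -> str:
    # Prone to backfire: the forbidden topic sits right there in context,
    # so the model's attention is drawn to it, toddler-style.
    return f"Write a short bedtime story. Do not mention {topic}."

def positive_prompt(safe_topics: list) -> str:
    # Safer: describe only what you DO want, never naming the exclusion.
    return f"Write a short bedtime story about {', '.join(safe_topics)}."

print(negative_prompt("dragons"))
print(positive_prompt(["a sleepy lighthouse keeper", "the sea"]))
```

The structural difference is visible before any model is called: the first prompt contains the forbidden word, the second never does.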
It's hard to tell artificial intelligence NOT to do something
Do you ever fill a closet with freshly poured glasses of beer? I don't think many people do, but it may be impossible to find an image for your generative AI widget's training data that embodies a particular strange thing not happening.
In this video, I examine an example of an artificially generated image that went a little bit wrong in an interesting and subtle way. I use it to explore some themes around prompt engineering, notably that it is problematically difficult at times to give negative (DON'T do this) instructions and also that it is important to remember the ways in which artificial intelligence does not really understand what it is doing.
Why LLMs are big, shadow IT, and what is realistic in-house
It is undersold that tools like ChatGPT, when compared with some of the larger universe of things we might call artificial intelligence, are very easy to start using with no special expertise or help from other data and IT systems at a business. This is probably enough to make it a born shadow IT classic, but it also appears to me at this (certainly fleeting and soon to evolve) moment that efforts to provide similar in-house tools to compete with the shadow-IT-on-phone option could turn out to be complicated and ultimately disappointing in performance.