The European Data Protection Board announced a record $1.3 billion fine against Meta, parent of Facebook and Instagram, for illegally transferring European Union users' data to the United States. Beyond the basic facts, I review the abundant and important backstory for these events: Edward Snowden's surveillance revelations, the history of data privacy frameworks going down in flames following lawsuits by Max Schrems, and the complicated political intersections that drag this all out forever, including the unique role played by regulators in Ireland.
Juicy tidbits from the history and philosophy of AI
The history and philosophy of artificial intelligence has some surprisingly useful nuggets in it that orbit the question: Is human intelligence "real"? What is it?
I talk about the two previous cycles of AI hype and collapse into "AI winter," first in the 1960s and then in the 1980s. In the former, academic philosophers like Hubert Dreyfus offered interesting critiques orbiting issues like our misunderstanding of human consciousness (in a sense, still our only available prototype for AI) and how difficult it is to separate what we understand as intelligence in ourselves from the visceral desires that come from our bodies. I would say these criticisms anticipated the use of reinforcement learning, among other things, and they may hold more practicality than you might think.
Fear marketing in artificial intelligence
Let's peer behind the veil at the marketing aspects of artificial intelligence (notably LLMs like ChatGPT of late) and observe how much of the psychology of fear is in play. Is this something we should snap out of a bit? In my opinion, yes.
I discuss how the concerns of Geoffrey Hinton, the so-called "Godfather of AI" who left Google to talk about the dangers of AI, actually differ very little from the public statements of top executives at companies like Microsoft and his former employer. What gives? It appears fear is great marketing for AI, and I go into further detail on why the widely discussed potential pause on AI research is probably mostly conscious marketing by the public figures discussing it.
Samsung !bans! generative AI tools like ChatGPT
It was reported yesterday that Samsung has banned internal use of generative AI tools like ChatGPT, and for exactly the kinds of reasons I have been discussing in these videos. They have information they would like to control, and that is difficult if employees send it to a third party like OpenAI that has yet to prove it can protect it from criminals. Privacy-enhancing technologies, perhaps homomorphic encryption and federated learning in this example especially, are the hand-in-glove solution to this problem, and they even clean up some of the ethical issues as well.
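To make the homomorphic encryption idea above concrete, here is a minimal sketch of a textbook Paillier cryptosystem, the classic additively homomorphic scheme: a third party can add encrypted numbers without ever seeing them. The prime sizes and helper names are illustrative assumptions for this toy demo only, not anything from Samsung's or OpenAI's actual systems, and the key is far too small to be secure.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# WARNING: tiny primes for illustration only -- never use in production.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # modular inverse of lam mod n

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover the plaintext using the private key (lam, mu)."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# A server holding only ciphertexts can add the underlying values:
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

The same additive trick underpins private aggregation in federated learning, where a coordinator sums encrypted model updates without reading any single contributor's data.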
ChatGPT Data Breach!
OpenAI confirmed reports of a data breach involving ChatGPT today. Apparently criminals were able to access other users' chat histories (and maybe thus glean interesting intel on their employers) as well as cardholder names, the last four digits of card numbers, and expiration dates of payment cards (more obvious ouch!) for the small minority of users presently paying. If you were excited about using these kinds of tools at work, this is a risk you need a plan to manage. It is going to happen again, to OpenAI or a similar firm with a similar offering, and it's going to be an important part of the story of this technology. We will be putting out more content about events of this nature and best practices for keeping yourself out of related trouble.
An example of where ChatGPT is a good, SAFE idea and why
Sometimes all the theory in the world can't open your mind like one example, and thus we've taken steps to provide tangible examples of how to use, and how not to use, generative AI tools like ChatGPT. As before, my main points of emphasis are:
you need confidence that you have a cheap method of quality control
you need to be sure you aren't telling OpenAI secrets you shouldn't
As I announced yesterday, we are creating a basic iOS app, really to showcase our privacy (and cybersecurity)-enhancing technology suite Ghost PII, but along the way we want to provide other useful content. Here we get started writing basic code for the app in Swift using ChatGPT and outline why this is an appropriate application for us relative to my two criteria above.
Private iOS apps with Ghost PII, plus generative AI best practices
This quarter we will be releasing a series of posts and videos on building an iOS application using Capnion's tool Ghost PII to add a very high level of privacy (and cybersecurity) protection throughout the entire stack, so that not even the application server sees users' data in plaintext. Along the way, we will be using generative AI tools like ChatGPT where appropriate and also expounding best practices about where that is appropriate, where it isn't, and why. While I have you, I couldn't help but comment on some stories about the "Godfather of AI" Geoffrey Hinton leaving Google to free himself to comment on the dangers of artificial intelligence. As usual, my angle here is that we are just now noticing a problem we have had for a while.
Your boss might not be ready for ChatGPT even if you are...
We are all talking about how useful ChatGPT will be at work, but there is a conflict brewing over whether you should decide to use it at work all on your lonesome.
It might be that there are privacy concerns about the data involved, your questions might suggest things about your employer's intellectual property, or it might just be that your sudden dependence on OpenAI's cybersecurity makes someone nervous. You could be an employee using a chatbot now when you really shouldn't be, or you could be an executive sitting on top of a newly opened abyss of cyber risk you haven't yet mapped out well.
In addition to pointing out this rather prosaic issue in detail, I note...
how we have de facto grandfathered older practices around using Google or similar,
the probably surging value of our chatbot queries as a data asset,
and why these concerns actually relate a lot to privacy-enhancing technologies like homomorphic encryption in particular.
Software engineering was already prompt engineering
I have some optimistic thoughts for the software engineering crowd especially about what the putative artificial intelligence workplace future created by ChatGPT, or more specifically Copilot, might look like. The story is really just that a new kind of programming language is emerging, not qualitatively different from what has come before, and if you have even a little bit of polyglot in you then you are already playing the right game.
I also point out a couple of pitfalls suggested by this analysis - I have met people who make a lot of money as the last person working in an ancient language, and I have had some unpleasant adventures with programming languages that wanted to do too much and were too eager to accept sloppy instructions.
If you are a Python developer interested in new things, I also have an advertisement for you.
Model drift vs. predicting stonks from Federal Reserve minutes
The artificial intelligence future-world we are all imagining in the wake of LLMs like ChatGPT will not be as shiny as many imagine, nor as apocalyptic, because the more real the hype turns out to be, the more it will undermine the broader social institutions it needs in order to make sense. You will definitely still have a job, and while you may want to put some kind of future prompt-engineering skill on your resume, it will probably be a virtue you possess today that gets you over the line and onto the payroll.
In making this point I touch on things like...
the 2006 classic Beerfest
moral crises of garbage information like QAnon
reading Federal Reserve minutes quickly to interpret stock movements
the "beauty mark" as a profound quirk of human cognition
My main argument, once again, is that model drift will be a severe problem for these technologies long-term.
How to cope with the AI revolution that isn't coming how you think
I will tell you the right things about how to cope with the artificial intelligence revolution that isn't coming in exchange for your ears regarding the privacy revolution that can and should be. I also touch on topics like...
impostor syndrome
market power in big tech
the tight incestuous circle of power that is the apex of Silicon Valley
good old fashioned subconsciously dumping your pain and fear on others
If you are still reading, good job! Your reward is the end of the story: try to make a friend and read a book today. It was good advice yesterday, and ChatGPT will help make it better advice tomorrow.
ChatGPT is roughly as good as these things will get
If the hype about models like ChatGPT is real, then these models are now about as good as they will ever be and are set to start deteriorating in the years ahead. If you are a machine learning practitioner, the short story as to why is [model drift] + [internet culture] + [fame and adoption]. If you are not, you can learn more via this video.
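For practitioners, the drift half of that story can be shown with a toy sketch (the distributions and numbers here are made-up illustrations, not claims about any real model): fit a simple classifier on one data distribution, freeze it, then watch its accuracy fall when the world shifts underneath it.

```python
import random
import statistics

random.seed(0)

def sample(mu0: float, mu1: float, n: int):
    """Draw n points per class from two Gaussians (a stand-in for real data)."""
    return ([(random.gauss(mu0, 1.0), 0) for _ in range(n)] +
            [(random.gauss(mu1, 1.0), 1) for _ in range(n)])

# "Train" on yesterday's world: classes centered at 0 and 2.
train = sample(0.0, 2.0, 1000)
m0 = statistics.fmean(x for x, y in train if y == 0)
m1 = statistics.fmean(x for x, y in train if y == 1)
threshold = (m0 + m1) / 2  # the frozen model: a midpoint threshold

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

acc_before = accuracy(sample(0.0, 2.0, 1000))  # same distribution as training
acc_after = accuracy(sample(1.5, 3.5, 1000))   # the world drifted; model didn't

print(f"before drift: {acc_before:.2f}, after drift: {acc_after:.2f}")
```

The same dynamic threatens an LLM with frozen weights: the text it sees stops resembling the text it learned from, in part because the model's own output starts flooding the internet it was trained on.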
A painful legal either / or around ChatGPT and generative AI in general
Artificial intelligence models that require huge and diverse datasets to train, like ChatGPT for example, seem poised to place their creators between the proverbial rock and a hard place regarding a couple of important legal issues. On the one hand, it is very imaginable that sooner or later someone sues over bad AI advice, and in that situation it would be great to invoke something like Section 230 and claim the fault lies with the training data you got from someone else over the internet. But then... people have already sued because they feel their intellectual property was infringed by the training process, and in that situation the OpenAIs and Googles of the world would love to claim the AI's output is original work that they own. These are unpleasantly contradictory stances...
The newspaper is 100% pandemic hangovers
I was browsing "LinkedIn News" and it seems like basically everything we discuss these days is really a lingering legacy of the pandemic somehow. I cover some of the basic, and very extreme, macroeconomics of the lockdown phase before examining Big Tech layoffs, the strength of the more blue-collar sector of the labor market, who is and isn't working at all, banking crises, housing speculation, and (most importantly) the significance of who is paying how much for which part of the chicken.
Legal wrangling in the ongoing DeSantis v. Disney confrontation
Check out my update on the recent Disney vs. DeSantis feud regarding Disney's past, and maybe future, self-governing district around their amusement park in Florida. I cover the last-minute, sneaky, but also embarrassingly broad-daylight contracts that may allow Disney to maintain control in all but name; a semi-arcane but important legal rule called the "rule against perpetuities" that has people asking "Why King Charles?"; and some speculation on the optics of the dispute going forward.
Controversies regarding ChatGPT creator OpenAI
If large language models like ChatGPT even halfway fulfill their current hype, the people who build them will acquire a level of control over how we consume information that will make us long for the days when we were just freaked out about Google and Facebook. Thus, it is important to understand the institutions that build and control them. In this video, I cover some controversies around OpenAI, including its awkward combined for-profit and not-for-profit status, why Elon Musk is salty with them, whether telling people they can "only" get 9900% returns on investment makes sense, and finally I beg the legal community to help me figure out why I feel so sketchy about their governance situation.
The real reasons AI may make you miserable in the end
Take ~9 minutes to learn the real reasons why the artificial intelligence revolution embodied in technology like ChatGPT may make you miserable in the end.
Don't worry that you will be left in material poverty; worry that you will be left in cultural poverty.
In praise of middle management
Through my eyes, in both my lived experience and anything I have ever read that seemed even semi-relevant, middle management is actually the layer that most determines good and bad outcomes. I perceive that not a whole lot of people share this view, and I would like to persuade you. Believe me, I could tell a lot of stories from my own career, but to keep things on neutral ground I give a number of examples from military history that you can read about in a book and that I think tell a story very relevant to large corporations.
Huge boobs and no hands
A polemic on why generative AI is less mature than you think, and yet why it can still rule social media.
Data sovereignty explored via TikTok, China, and a Dutch class-action lawsuit
If you ask me, data sovereignty may turn out to be the truly exciting 21st-century data business headache. I examine these issues via the current controversy involving TikTok and ByteDance, the (mostly unsung) broader history of similar conflicts with China with roles both the same and reversed, and the actually very similar issues with Meta and Facebook in Europe, explored via a recent Dutch class-action privacy lawsuit.