A Simple Take on What Responsibility Means for Artificial Intelligence
It really comes down to getting a good night's rest.
Morning y’all!
Busy, busy, busy is what my calendar looks like and the pressures of building and starting anything new are, indeed, pressing. But, I think it’s important to carve out time for the things that give you life, the things that make you happy.
As simple as this might sound, I imagine you have struggled with balancing the things that need to get done and the things you'd like to get done; ironically, as I've gotten older I've realized that taking time for oneself belongs in the "need" bucket more than any other. What is often missing is the right psychological framework and then the discipline to act on it.
Writing, for me, is one of those things that could quickly (and easily) take a backseat on the laundry list of daily to-dos, but I know that I always feel much, much better when I do it and, like exercising, it's something positive that compounds over time.
I won’t bury the lede here because this is effectively my overly-simple thought pattern for how artificial intelligence should be handled, at least in terms of responsibility.
Three events that have surfaced recently are the Elon vs. OpenAI lawsuit, the Google Gemini debacle, and Microsoft’s more recent embarrassment with the FTC and their Copilot product that doesn’t seem to protect users.
On the first, we've recently seen a blog post via OpenAI that attempts to counter the claims Elon has made about the organization being anything but "open," especially as it relates to open source and profits. The high-level points are as follows:
Emails show Musk telling OpenAI to raise far more money ($1B+), pushing the non-profit toward revenue. This wasn't necessarily about profits, though; Musk even suggested in 2018 that Tesla should acquire OpenAI.
One email details how being "open" refers to AI benefiting the world and humanity at large, not to being open source and sharing the technology, a framing that is laughable to most in the open source community.
Musk, undefeated as always, tweeted that he’d drop all of this if they renamed themselves to “ClosedAI”. Hah.
There will be much more to come from this public drama and it's anyone's guess how it'll ultimately pan out, but it's telling that one of the richest men on the planet has simultaneously applied pressure to an ostensibly open source company.
More recently a software engineer at Microsoft informed the FTC that the Copilot Designer product is effectively dangerous, citing that the tool produces outcomes that show graphic violence, underage drug and alcohol use, (political) bias, copyright violations, and more:
The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.
Whoops. Apparently Shane Jones asked the company to put more safeguards in place but was ignored, thus the escalation to the FTC. The obvious subtext is that Microsoft is more interested in rolling out its product for revenue than in executing against its own "responsible AI" principles.
And we've already covered Google's Gemini insanity, in which I stated that responsibility is a "community effort" — but I'd like to add to that insofar as there is clearly a level of personal responsibility as well.
As I stated in the beginning I believe that we’re all called to make informed decisions around our own health and well-being which, in turn, informs and contributes to the larger community in which we sit.
With Elon and Shane we see individuals publicly surfacing discrepancies between what is said and what is actually being done, and the delta between the two is the tension that we all must sit with, especially as this technology becomes a bigger and more integrated part of our world.
In the former case Elon has, at least on an individual level, very little to lose but Shane, not having billions in the bank, will more than likely lose his job as a result of his report to the FTC. Ethics is a sliding scale and we each have to weigh what we’re willing to lose in contrast to what we may personally gain, even if it’s as simple as getting a good night’s rest as we review the work that we’ve accomplished that day.
The frontier that is artificial intelligence is precisely that: a frontier, and if we take that analogy further we understand that any frontier is dangerous, littered with dead humans, ideas, and dreams. What is most important for me is that my health isn't impacted by the work that I do, that I get a good night's sleep, and that I'm available to the people that I love and who need me.
You may not be able to trust anyone else's intentions with regard to artificial intelligence, but you can begin to trust your own instincts as to how much you want to participate, whether by building products and services explicitly in the space or more simply as a user and participant.
Like most technology there are always alternatives. Let your own conscience guide you. Your time is your time and you’ll never get it back. Being responsible (with AI) comes down to that. Where and how you invest your time is indicative of what you believe and a signal of where you sit on the ethical scale.
Just make sure you do it for yourself and no one else.
※\(^o^)/※
— Summer