
In a 2004 essay titled “Ambiguity & Truth,” legendary designer Milton Glaser presented a list he called “The Road to Hell”: a sequence of design jobs of increasing moral dubiousness, meant as a test of where a designer would draw the line on the work they take on.

I liked the idea of his test as a moral compass for acceptable design work. Making the Lucky Charms leprechaun a cartoon instead of lifelike isn't unethical. Designing T-shirt decals for a company like Shein is more questionable.
Over the past few months, I've had Glaser's test in the back of my mind. I couldn't escape the feeling that creating a version of this list for how I use AI would help me navigate the moral quagmire artificial intelligence presents. Establishing red lines would prompt self-reflection. Once I attempted to write something down, however, I felt my mind trying to run away. I believed creating rules around my AI usage would be helpful, yet I resisted. Giving my inner critic new criteria to torment me with felt more corrosive than continuing to live in moral ambiguity.
I discovered that defining my relationship with AI would expose two seemingly contradictory fears: being too permissive about how I use AI would expose me as a hypocrite, and setting too many guardrails would become a form of self-sabotage. The two fears fused into a quantum superposition: too lax and too rigid at the same time.
I’m a hypocrite

Calling myself a hypocrite is not self-flagellation. It's an observation of an inescapable condition of living in a complex, messy, weird world. Following our beliefs and moral standards 100% of the time is an unbearable burden.
My sensitivity to my own hypocrisy arises from a deep desire to be virtuous. I see being virtuous as the best chance I have in life to be loved.1 And I believed that virtue could only be developed through impeccable integrity—like those vacuum-sealed rooms where semiconductors are manufactured, in which the slightest speck of dust can ruin the entire operation.
But I don't live in a vacuum-sealed room. My path to virtue has a long trail of fuck-ups, disappointing behavior, regrets, and moral inconsistencies. Some are small, like not picking up a wrapper and placing it in the garbage bin one foot away. Some are bigger, like the time I got suspended in high school for writing a biting, homophobic takedown of my bully on Myspace.2
The big mistakes leave a trail of atonement and disappointment. They also increase my aversion to creating new instances of unvirtuous behavior. Like being a hypocrite.
I procrastinated on coming up with my own AI rules because part of me knew I would inevitably confront not only my hypocrisy around AI, but my hypocrisy around modern technology in general.
Take my iPhone, for instance. It contains rare minerals, and it is highly likely that many of them were mined illegally, extracted with underpaid or underage labor, or obtained at great environmental cost. I can refer to this portal of infinite knowledge as a “blood cobalt” device, and it would not be hyperbole.
The same uncomfortable awareness appears when I think about AI. I've documented my delight in using this technology, along with some reservations. If I had to tally AI's costs and benefits in my life over the past couple of years, it's been a net benefit. It would have taken me weeks instead of days to research my piece on tariffs. The quality of my writing has benefited from using AI to critically challenge my arguments and the scaffolding of my essays.
And yet, I'm annoyed by how quickly we've moved on from discussing the legality and ethics of using copyrighted data to create LLMs (beyond fair use), the environmental costs (which will decrease over time, but should still be acknowledged), and how AI has ramped up the sophistication and volume of misinformation.
But here I sit, having used AI four times in writing what you’ve read so far.3
Hypocrisy, then, is not resolved but managed. Living in transcendence of moral dilemmas would require a level of hermitage I could never endure. The next best thing I can do is refuse willful ignorance, confront my contradictions, and accept that, for my own sanity, there are some incongruences in life I simply have to live with.
Building a fire with sticks and stones
The second fear is that a pursuit of purity from AI would turn me into a kind of Amish, one Rumspringa away from realizing all I'd missed because I talked myself into technological stasis.
If my day job, creating heartfelt and jargon-free content for businesses, were 100% AI-free, would that be doing a disservice to me? Would it be doing a disservice to my customers, who in general care less about the process than the outcome? They may not care whether I start a fire with sticks and stones or with a lighter. They just want fire.
To remain relevant in my new career as a content strategist and have a chance of growing out of the “ugly duckling” phase of my business, I feel forced to master AI tools. Otherwise, some dude on LinkedIn who boasts about bringing in thousands of dollars a week through his AI-generated outreach emails will eat my lunch and take a victory lap.
The broader subtext of AI adoption I'm influenced by is that I have to race to maximum proficiency, using this reality-altering technology to gain a competitive advantage and grow my financial and social status. This undercurrent is strong and fraught with recency bias, because there's first-hand evidence of key figures from the dot-com boom becoming billionaires with immense power.4 History suggests this new era will also create a new class of fabulously rich and influential winners.
But I've come to find the narrative that I have to master AI exhausting and reductive.
Recently, I had dinner with Substack star Rick Lewis.5 When I talked to him about my dilemma, he shared his view on using AI:
“I have exactly zero interest in AI for writing. Every goal I have for expression and communication would be undermined by using it.”
His response lingered. None of the writers I grew up adoring used AI, yet they wrote evocative, enrapturing works. And beyond the realm of writing, if AI vanished into thin air tomorrow, my quality of life wouldn't substantially decrease.6
My concocted binary between technological stasis and mindless submission to artificial overlords is a false choice. Finding the synthesis between the two poles is exactly why I need my own set of rules. The exercise asserts my agency over this marvelous and dangerous technology. It turns my grapple with the binary from a wrestling match into the sweet surrender of embracing duality.
My AI rules… actually, they're principles
Coming up with a list à la Milton Glaser is a bit more complex: design as a field has clearer parameters than AI, the ethical dilemmas of this new technology feel far more expansive, and the supersonic rate of development makes it hard to create fixed rules.
What I think serves me better is to assess my usage of AI around these key principles:
AI as augmentation, not replacement. I want to use AI to help me do things better, not do them for me.
Avoid "prompt-and-go" instructions where I tell AI to do something for me. Outsourcing my thinking will erode my critical thinking skills.
Treat AI like a collaborator, not an intern. This approach helps me fully use its power rather than settling for mediocre output.
Don't ask AI to solve my “blank page” problem. I've come to realize that the friction I feel when I stare at a blank page, or feel like I'm out of ideas, is a fundamental part of the creative process. Depriving myself of this friction for the sake of expediency will have long-term negative effects.
Moderation is key. Overdoing something, even if it’s valuable, comes with its tradeoffs—many of them hidden.
What does this look like in practice?
It means that I will use AI often for research and fact-checking.7 I will also use AI for editing my writing, though I prefer it for catching spelling errors and explaining grammar mistakes so I can avoid them in the future. While editing, I may ask it for alternative phrasing. I rarely go with what AI suggests, but it will usually trigger another phrasing I hadn't thought about.
I will also use AI for multimedia creation. This one is tricky, because in an ideal world I would pay an illustrator, or have enough artistic proficiency to design the illustrations myself. So I've arrived at a compromise: I will partially use AI-generated images in my illustrations for Tangent, progressively weaning myself off them and doing more in Illustrator/Photoshop as my skills improve.
I will use AI for vibe coding (using text prompts to get an AI model to write code). I confess that I have fewer qualms about AI generating code than about it generating words, images, or sounds. Perhaps there is an inherent spirituality I perceive in creating artistic and literary work that I've never perceived in creating digital products. I don't have a clear answer. I'm still working my way through this moral swamp.
Finally, I will periodically use AI for reflection and as a learning tutor. For reflection, I am comfortable using AI as a conversation partner inspired by cognitive behavioral therapy. AI as a learning tutor is probably the most benign use case. For example, I created a website using Replit that acts as an AI prompt coach: I feed it the prompt I want to use, get feedback on how to make it better, and iterate. Note that I am not asking my AI tutor to rewrite the prompt for me; I'm asking it to teach me how to write it better. For the curious, a sketch of that loop follows.
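If you want to build something similar, the coaching loop is simple enough to sketch in a few lines of Python. This is purely illustrative: it assumes the OpenAI SDK, and the model name and coaching instructions are placeholders, not what my Replit site actually uses.

```python
# Minimal sketch of a "prompt coach" loop like the one described above.
# Hypothetical: assumes the OpenAI Python SDK (openai >= 1.0); the model
# name and coaching instructions are placeholders, not the real ones.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COACH_INSTRUCTIONS = (
    "You are a prompt-writing coach. Critique the user's draft prompt and "
    "suggest how to improve it. Do NOT rewrite the prompt for them."
)

def coach(draft_prompt: str) -> str:
    """Return feedback on a draft prompt without rewriting it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": COACH_INSTRUCTIONS},
            {"role": "user", "content": draft_prompt},
        ],
    )
    return response.choices[0].message.content

# The iteration loop: read the feedback, revise by hand, submit again.
if __name__ == "__main__":
    draft = input("Paste your draft prompt: ")
    while draft.strip():
        print("\nCoach feedback:\n" + coach(draft))
        draft = input("\nRevised prompt (empty line to stop): ")
```

The important design choice lives in the system instructions: the coach is told to critique and never to rewrite, which keeps the thinking, and the revising, on my side of the loop.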
What are my red lines?
Based on my principles, my red lines boil down to not passing off AI-generated writing as my own, and not commercializing AI-generated work created by prompting in the style of a specific artist or designer (you won't see me selling Studio Ghibli-style shirts on Etsy any time soon).
Also, I will not use AI to generate posts based on my writing or the writing of my clients. I may ask it to brainstorm complementary pieces, but I will not outsource the creation of posts to AI.
***
I expect these principles to evolve as these technologies mature and as I continue engaging in earnest with my moral dilemmas. But after avoiding this exercise for the past few months, I'm comfortable with where I've landed.
My rule-making as a way to break through the binaries is partly an effort to take more authorship of my life, a way to cope with how the world is changing. The open question of what our world will look like in ten years, or even two, creates a kind of vertigo I've seldom felt. In those moments of uncertainty, I imagine myself becoming increasingly irrelevant and outdated, without any clear sense of how I'd fit into this new world.
What I gained from this exercise wasn't just the principles themselves, though having them does save me mental energy. It was a reminder of an evergreen lesson that took me way too long to understand: when everything feels uncertain, build the certainty within yourself.
1. Where are my Enneagram 2s at??
2. That was the first time in my life I truly understood the power of my words, for good or harm, and it fortunately led to a more tolerant, aware version of me.
3. I used LLMs to give me a definition of the word “quagmire,” to challenge my definition of “hypocrisy” and provide alternatives, to fact-check whether semiconductor chips have to be built in the pristine environment I described, and to fact-check whether rare minerals are used in the iPhone.
4. This narrative neglects that pointing to Internet-boom figures like Marc Andreessen, Jeff Bezos, and Elon Musk, who are business titans now, is an example of survivorship bias.
5. Rick probably hates this description. Which makes me love it even more.
6. You can make a genuine argument that our collective existence needs AI to help develop and accelerate solutions to existential threats like climate change. I'm sympathetic to this argument.
7. To fact-check what I write, I use Perplexity, since it provides the sources it used; I can review them to assess whether the AI hallucinated its fact-check.