How to Get 80% More from AI with Better Prompts
🧭 THIS WEEK AT AI SECOND ACT
This is the second edition of the newsletter — and I’d love your help shaping it. You’ll find a quick feedback poll at the end, and I always welcome direct replies.
👉 Just hit "Reply" and let me know what you want more (or less) of 💬. My goal is to make this as valuable and practical as possible for professionals like you navigating the new AI era. 🚀
This week’s issue is all about prompting — the first skill to learn because it brings the highest return from AI. The fact is, better prompting, even if it’s only '80% great', will unlock a lot of value through much better answers from the models.
🧠 Where do chatbots and prompting fit in the big picture?
AI – The umbrella: anything that mimics human intelligence
Machine Learning (ML) – Systems that learn from data
Deep Learning – Powerful ML using neural networks (for images, speech, text)
Generative AI – AI that creates stuff (text, images, audio)
🧠 LLMs – These are the powerful language models (like GPT-4, Claude) behind the scenes of chatbots. They’re trained to predict and generate content — and chatting with them through tools like ChatGPT or Claude is how we interact with them.
💬 Chat tools like ChatGPT by OpenAI, Claude by Anthropic, Gemini by Google, Perplexity, and Grok by xAI are user-friendly ways to interact with LLMs (in this edition, we're only talking about the web interfaces, not APIs).
Don't care about the details? Skip this bit :)
At its core, a large language model (LLM) is a mathematical algorithm trained on massive amounts of text using complex techniques from probability and linear algebra 🧠💥 (my head explodes thinking about the maths here — Palmer, South Australia primary school did not teach this!).
It doesn’t 'understand' meaning the way humans do. Instead, it predicts what word (or phrase) is likely to come next, based on everything it has seen before.
When you type a prompt, it looks at all the words and patterns in your question — and then chooses the most likely response based on its training.
🤖 Think of it like autocomplete, but trained on nearly the entire internet, books, code, and more. (Yes — there are open questions and lawsuits about copyright here! ⚖️).
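If you're curious what "predict the next word" actually looks like, here's a deliberately tiny Python sketch. It's a toy word-count model, nothing like the neural networks inside GPT-4 or Claude, but it captures the core mechanic: look at what came before, then pick the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which word in a
# tiny example corpus, then predict the most common follower.
# Real LLMs use neural networks over tokens and vastly more data,
# but the basic job -- predict the next piece of text -- is the same.
corpus = "the quick brown fox jumps over the lazy dog . the quick brown cat naps".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    seen = followers.get(word)
    return seen.most_common(1)[0][0] if seen else "(no idea)"

print(predict_next("quick"))  # -> brown
print(predict_next("lazy"))   # -> dog
```

The point of the toy: the output is shaped entirely by the patterns in the input, which is exactly why the wording of your prompt matters so much.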
💡 The quality of your prompt shapes the quality of the output.
In this issue:
AI Chat tools comparison
Which tool should you use (and when)
The 80/20 of prompting
Prompt recipes for real-world tasks
🧰 AI NEWS + LEARNING
Anthropic Economic Index: AI’s Impact on Software Development
AI 2027 - (Crazy?) Research and forecast of AI growth - We (might) need the Terminator! 🤖
Kaggle Prompt Engineering Whitepaper -> Dive deeper into prompting
Perplexity Deep Research -> Basic description and examples of the power of deep research

🗺️ FEATURED INSIGHT
🧠 First: Which AI tool should you use?
Here’s an updated look at the top tools to use today:
Tool | Best For | Why Use It | Access |
---|---|---|---|
Claude (Anthropic) | Complex reasoning, document analysis, coding; safety-sensitive environments | Hybrid reasoning with "step-by-step" or fast responses; excels at structured thinking | Free + Paid |
ChatGPT (OpenAI) | Writing, brainstorming, planning; multi-session productivity | Fast, multimodal, great for general use and logic; supports cross-chat memory | Free + Paid |
Perplexity | Real-time research, facts | Web-connected, concise answers with cited sources; choose GPT-4, Claude, or Mistral inside | Free + Paid |
Gemini 2.5 (Google) | Google Docs/Slides integration, visual tasks | Native integration with Workspace apps and strong image/chart analysis | Free + Paid |
Grok (xAI) | Social and news-aware summarization; integrated workstreams | Integrated with the X platform; supports cross-thread memory and trending awareness | Free + Paid |
LLaMA (Meta) | Custom applications, open-source environments | Open weights; flexible deployment; great for developers who want control | Free |
Copilot (Microsoft) | Microsoft 365 productivity, enterprise integration | Deep Office integration, Teams/Outlook summarization, secure for enterprise workflows | Free + Paid |
✨ In my experience using most of these tools, here are a few standout features worth noting:
🔍 Deep Research – Perplexity leads here, offering fast, source-backed summaries. ChatGPT, Gemini, and Grok have added similar capabilities recently (in paid tiers). Expect more tools to follow.
🧠 Memory – Not just remembering a single conversation (which is a given), but carrying knowledge across chats and projects. In ChatGPT and Grok, this is especially powerful — the model can learn what you're working on and give better, more personalized answers over time.
🌐 Web Search – AI models are trained on massive amounts of past data, which means their knowledge can quickly become outdated. Web search adds live information — helping tools like Perplexity, Grok, or ChatGPT (paid tiers) include recent facts, updates, and events in their answers.
📄 Document Upload – Upload documents so the model can read and reference them. Great for fast summaries, extracting insights, or answering questions based on the content.
🧾 Canvas – A shared space where you and your AI assistant can co-write, edit, or develop ideas, documents, or code together in real-time. Great in ChatGPT and Claude.
💬 Which AI Tool Should You Use?
These are the top insights that tie everything together. If you remember one thing from this edition — it’s this section 👇
Use ChatGPT for day-to-day work.
Switch to Claude when you need thoughtful analysis, when you're working with long, complex documents, or when you just want a different answer.
Use Perplexity for fast or deep research, including live information from the web.
Those are my top three go-tos, but you may also want to explore Google Gemini or Microsoft Copilot depending on your work environment.
🔄 80/20 of Prompting: Small Tweaks, Big Results
Here’s how to get 80% better results with just 20% more effort in your prompts:
🎭 Give it a role — e.g. “You are a project manager…” → Helps the model respond with the right tone and expertise
📋 Add structure — e.g. “Give me 3 bullet points + a short summary.” → Makes responses easier to use
📌 Provide context — e.g. “We’re preparing for a leadership offsite…” → Grounds the AI in your specific task
🚫 Say what to avoid — e.g. “No fluff. Keep it under 150 words.” → Saves editing time
🔁 Use follow-ups — Don’t restart. Say: “Can you rephrase this for an exec audience?” → Builds better output step by step. Iterate, iterate, iterate.
One thing to add: with the current state of the technology, you'll need to iterate until you get the answer you want. It's still much quicker than working without AI, and you'll likely find yourself trying different tools to get the result you're after.
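To make those five tweaks concrete, here's a small illustrative Python sketch that stitches them into a single prompt you could paste into any of the chat tools above. The helper name and wording are just examples, not a required format:

```python
# Illustrative only: combine role, context, task, structure, and "what to
# avoid" into one prompt you can paste into ChatGPT, Claude, or Perplexity.
def build_prompt(role: str, context: str, task: str, structure: str, avoid: str) -> str:
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {structure}\n"
        f"Avoid: {avoid}"
    )

print(build_prompt(
    role="a senior project manager",
    context="we're preparing for a leadership offsite next month",
    task="draft an agenda for a 90-minute planning session",
    structure="3 bullet points per agenda item, plus a short summary",
    avoid="no fluff; keep it under 150 words",
))
```

If writing code isn't your thing, just keep those five headings in mind as you type the prompt by hand, then iterate with follow-ups rather than starting a new chat.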
💼 Real Work, Real Prompts: Common Mid-Career Use Cases
Work Task | Best Tool | Prompt Example |
---|---|---|
Drafting a client-facing project update | ChatGPT | “You are a senior project manager. Write a clear, confident status update summarizing 3 key risks, milestones, and actions for an executive client.” |
Summarizing a 20-page technical spec or RFP | Claude | “Read this document and give me a summary of goals, constraints, and key deliverables in bullet points.” |
Researching competitive features or pricing | Perplexity | “Compare the pricing and integrations of Jira vs Monday vs ClickUp for large teams. Include sources.” |
Planning Q3 team goals or roadmap | ChatGPT | “Act as a senior program director. Help me outline 3 Q3 goals for a cross-functional engineering team, with owners, risks, and success metrics.” |
Reviewing product feedback across emails/Slack | Claude | “Summarize key product pain points mentioned in this pasted feedback log. Categorize them by theme.” |
Writing a performance review draft | ChatGPT | “Help me write a performance review for a mid-level engineer. Highlight strengths in ownership, cross-team collaboration, and initiative.” |
Creating an internal tool evaluation report | Claude | “Based on the documents I paste, write a brief comparing three AI note-taking tools for engineering leaders.” |
These tools aren’t magic — but they’re excellent thought partners, editors, and accelerators. Start small, iterate, and keep improving how you ask for the result you want.