Companion site for the Practical AI talk

Practical AI

A plain-language guide to what AI is, when to use it, and how to get great results.

01

What is AI?

AI is a technology that learns patterns from massive amounts of data and uses those patterns to do new things — answer questions, generate writing, summarize documents, write code, create images, and a lot more.

Two Main Types of AI

Predictive AI

Predicts outcomes from patterns in data. Powers fraud detection, recommendation engines, demand forecasting, medical diagnostics. It tells you what is likely to happen.

Generative AI

Creates new content — text, images, code, audio, video. Powers ChatGPT, Claude, Midjourney, and the tools this site is mostly about. It produces something that did not exist a moment ago.

How These Models Learn

Modern AI models are trained on billions of examples — books, articles, websites, and conversations. They learn patterns: grammar, facts, reasoning, code structure, what good writing sounds like. Once trained, they don't look anything up. They predict, token by token, what should come next given everything they've seen.

The AI Tool Landscape

General-Purpose Tools

  • ChatGPT — OpenAI
  • Claude — Anthropic
  • Gemini — Google
  • Copilot — Microsoft
  • Grok — xAI
  • Perplexity — Perplexity AI

Specialized Tools

  • Midjourney — Image generation
  • Flux — Image generation
  • Fireflies — Meeting notes
  • Gamma — Slide decks
  • Grammarly — Writing assistance
  • Jasper — Marketing copy

Different Modes

Fast Mode

Instant answers. Great for everyday questions, quick rewrites, brainstorming, and routine tasks. Pick this 90% of the time.

Thinking Mode

The model deliberates before answering. Slower, but much better at complex reasoning, multi-step problems, math, and code. Reach for it when the stakes are higher.

Tools & Features

Tools and Features overview slide from the Practical AI talk.

Today's AI assistants are no longer just text-in / text-out. These are the features that matter most — what each one is, when to reach for it, and where to find it.

01

Thinking Mode

A setting that tells the model to deliberate step-by-step before answering instead of responding instantly. Trades speed for accuracy on hard problems.

Example: Ask Claude to plan a five-day Tokyo trip with three kids, dietary restrictions, and a $3,000 cap. Fast Mode gives you a generic itinerary; Thinking Mode actually reconciles the constraints and flags conflicts.

Available in: Claude (Extended Thinking) · ChatGPT (o-series) · Gemini (Deep Think) · Grok (Think)

02

Web Search

Lets the AI look things up live on the open web instead of relying only on its training data. The model decides when to search; you can usually force it on or off.

Example: Ask "what changed in the SEC's climate disclosure rule this month?" Without web search the model is stuck at its training cutoff and may guess. With it, you get a current answer with links.

Available in: Every major chat tool — ChatGPT · Claude · Gemini · Copilot · Grok · Perplexity (built around it)

03

Deep Research

An autonomous research mode where the AI plans a multi-step investigation, browses dozens of sources, and produces a cited written report. Takes 5–30 minutes; runs while you do other things.

Example: "Build me a competitive landscape of US-based vertical SaaS companies with under 50 employees serving the construction industry. Include funding, headcount, and a one-line take on each." You get a footnoted report you can hand to a stakeholder.

Available in: ChatGPT (Deep Research) · Gemini (Deep Research) · Perplexity (Deep Research) · Grok (DeepSearch)

04

Memory

The model remembers things you tell it across separate conversations — your name, role, preferences, ongoing projects — without you having to repeat yourself every time. You can review and edit what it remembers.

Example: Tell ChatGPT once that you write in plain language, prefer bullet lists, and work in higher ed. From then on, every new chat opens with that context already loaded — no preamble required.

Available in: ChatGPT (Memory) · Claude (Project memory) · Gemini (Memory) · Grok

05

Personalization

Set persistent instructions that shape how the AI talks to you — tone, length, default formatting, what to assume about your work — applied to every new conversation.

Example: Tell Claude once: "I'm a marketing director. Default to 200-word responses, skip the disclaimers, and challenge weak ideas instead of just agreeing." Every chat starts in that mode.

Available in: ChatGPT (Custom Instructions) · Claude (System Prompts / Styles) · Gemini (Saved Info)

06

Document Analysis

Drop a PDF, Word doc, spreadsheet, slide deck, or audio file into the chat. The model reads it as part of your conversation and can summarize, compare, redline, pull data, or answer questions about it.

Example: Upload a 60-page contract and ask: "summarize the obligations on us, list anything unusual compared to a standard MSA, and draft a redline I can send back."

Available in: ChatGPT · Claude · Gemini · Copilot · Grok

07

Image Analysis

Upload a photo, screenshot, chart, or whiteboard scribble and the model can read what's in it — text, diagrams, objects, handwriting — and answer questions about it.

Example: Photograph a confusing nutrition label and ask "is this OK for someone on a low-FODMAP diet, and what are the worst ingredients?" Or paste a screenshot of an error message and ask what's wrong and how to fix it.

Available in: Every major chat tool now sees images — ChatGPT · Claude · Gemini · Copilot · Grok

08

Image Generation

Create images from a text description. Modern versions also edit images you provide — change the background, swap the outfit, restyle, remove a person, or extend the canvas.

Example: Upload a phone photo of your living room and prompt "make it look like a Pacific Northwest cabin — wood walls, warm light, fewer cables." You get a preview good enough to share with a designer.

Available in: ChatGPT (GPT-Image) · Gemini (Imagen / Nano Banana) · Midjourney · Flux · Adobe Firefly

09

Voice Mode

Talk to the model and have it talk back, in real time, with a natural-sounding voice. Most implementations now hear tone of voice and can be interrupted mid-sentence.

Example: Run a 20-minute mock interview while you walk the dog. The model plays a tough hiring manager, asks follow-ups, then debriefs you on what to tighten. No typing.

Available in: ChatGPT (Advanced Voice) · Gemini Live · Claude (voice on mobile) · Grok (Voice Mode)

10

Video Mode

Two flavors. (1) The model can see your camera or screen in real time and react to what's in front of it. (2) The model can generate full video clips from a text prompt.

Example: Point your phone at a broken sprinkler valve and ask "what is this part and where do I get a replacement?" — the model identifies it on camera. Or generate a 10-second cinematic shot of a kayak gliding through fog with Veo or Sora.

Available in: Live video: ChatGPT · Gemini Live · Generation: Sora (OpenAI) · Veo (Google) · Runway · Kling

11

Coding

The model can write code in any common language, explain it, debug it, and in many tools actually run it in a sandbox to verify the result. Goes well beyond pasting a snippet.

Example: Upload a messy CSV of sales data and ask "what were the top 5 products by margin in Q3, and chart them by week?" The model writes Python, runs it, and returns the numbers + a chart — no spreadsheet acrobatics.

Available in: ChatGPT (Code Interpreter) · Claude (Analysis Tool, Claude Code) · Gemini · Cursor · Windsurf · GitHub Copilot
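
The CSV example above can be sketched in plain Python. This is a rough stand-in for the code a model writes and runs in its sandbox; the column names (product, revenue, cost, quarter) are hypothetical and would be inferred from your actual file.

```python
import csv
from collections import defaultdict

def top_products_by_margin(path, quarter="Q3", n=5):
    """Sum margin (revenue - cost) per product for one quarter,
    then return the top n products, highest margin first."""
    margins = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["quarter"] != quarter:  # hypothetical column names
                continue
            margins[row["product"]] += float(row["revenue"]) - float(row["cost"])
    return sorted(margins.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

In practice the model typically reaches for pandas and adds a chart; the logic is the same.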

12

Computer Use

Give the model control of a virtual desktop — it can move the mouse, click, type, open apps, and operate software the same way you would. The lower-level cousin of Agent Mode.

Example: Hand Claude a spreadsheet, ask it to open a browser-based BI tool, build a specific report, and save the result back to your folder. It drives the apps directly instead of telling you what to click.

Available in: Claude (Computer Use) · ChatGPT (Operator) · Gemini (Project Mariner)

13

Agent Mode

The AI doesn't just write the answer — it does the work. It plans the steps, opens a browser, fills forms, runs code, and returns a finished result. You set the goal; it figures out the path.

Example: "Find me three flights from SEA to BOS next Friday under $400, in a comparison table, then add the best one to my calendar." The agent opens kayak.com, runs the searches, and (with permission) writes the calendar event.

Available in: ChatGPT (Agent Mode) · Claude (Claude Code, Computer Use) · Gemini (Project Mariner)

14

Connectors & MCP

Plug the AI directly into your tools — Gmail, Drive, Notion, Slack, GitHub, your database — so it can read and act in those systems with your permission. MCP (Model Context Protocol) is the open standard that powers most of this.

Example: Connect Claude to your Notion workspace and Google Calendar, then ask: "look at my project plan in Notion, find this week's deadlines, and tell me which meetings on my calendar should be moved." It reads both directly — no copy-paste.

Available in: Claude (Connectors / MCP) · ChatGPT (Connectors) · Gemini (Workspace integrations) · MCP spec

15

Skills

Reusable instruction packs you install once and then invoke by name — like add-ons that teach the model a specific workflow, format, or set of guardrails.

Example: Install a "Brand Voice" skill that knows your style guide, tone, and forbidden words. From then on you can say "draft this announcement using brand-voice" and it produces on-brand copy without re-explaining the rules each time.

Available in: Claude (Skills) · ChatGPT (GPTs play a similar role)

16

Projects

A persistent workspace that bundles a set of files, custom instructions, and chat history around a single piece of work. Every conversation inside the project shares that context.

Example: Create a "Q4 Board Deck" project, drop in last quarter's deck, the financial model, and a rubric for what good slides look like. Any chat in that project already knows your goals, history, and constraints.

Available in: Claude (Projects) · ChatGPT (Projects) · Gemini (Gems) · Notion AI

02

When Should I Use AI?

AI works best on routine professional tasks that require skill but follow predictable patterns — and on personal tasks where it can save you time, sharpen your thinking, or handle the parts you find tedious.

It's Not Google

Most people treat AI like a search engine — that's like using a race car to deliver pizza. You'll get there, but you're missing the point. Google answers your query. AI helps you do the work.

Use Google when

  • You need one specific fact
  • You want the most recent information
  • You need to verify something is true

Use AI when

  • You want a fact and the explanation around it
  • You have a series of related questions
  • You need help thinking through a complex topic
  • You want to explore ideas, not just retrieve them

Searching looks like this:

best vacation spots with kids

Instructing looks like this:

Act like a travel agent. Plan a 5-day family trip in August with kids under 10, $2K budget, flying from Seattle. Include rainy-day activities.

You're not searching. You're instructing.

Real Work Examples

  • Drafting emails, reports, and proposals
  • Summarizing long documents and meeting notes
  • Brainstorming names, taglines, and concepts
  • Translating text between languages
  • Cleaning up data or generating boilerplate code
  • Preparing for interviews, presentations, and tough conversations

Personal Examples

  • Planning trips and creating itineraries
  • Cooking — recipes, substitutions, meal plans
  • Explaining anything in plain language
  • Helping with homework or learning a new topic
  • Writing thoughtful messages — birthday notes, condolences, apologies
  • Sorting through a decision out loud

Strengths vs. Weaknesses

AI is great at

  • Brainstorming ideas
  • Drafting written content
  • Summarizing long documents
  • Reformatting and rewriting
  • Explaining difficult concepts
  • Translating between languages
  • Generating starter code

AI struggles with

  • Up-to-the-minute facts (without web search)
  • Math and precise calculations (without tools)
  • Anything where being wrong has real consequences
  • Tasks needing genuine human judgement or empathy
  • Knowing your private context unless you provide it

When Not to Use AI

Five times to put AI down — beyond the obvious (illegal use, anything where being wrong is catastrophic):

01

When you need to learn and synthesize

Asking for a summary isn't the same as reading. AI shortcuts the thinking that makes the learning stick.

02

When accuracy has to be near-perfect

Hallucinations are confident and plausible — the kind of error you stop catching once you trust the output.

03

When you don't yet know how AI fails

AI doesn't fail like humans. It will agree with wrong answers, double down, or fabricate a convincing source. Get hands-on with the failure modes before you trust important work to it.

04

When the effort is the point

Writers rewrite, athletes practice, students struggle. If working through the problem is what produces the insight, AI removes the insight.

05

When AI is genuinely bad at the task

Counting letters, rigorous arithmetic without tools, anything where it pattern-matches when it should compute. There's no manual; trial and error is the only way to learn the edges.

Check Your Company Policy First

Before pasting any work content into a public AI tool, check your company's AI policy. A few common rules:

  • Don't paste customer data, PII, or financial information into consumer chat tools.
  • Use the company-approved AI tool when one exists — it's often the same model with the right privacy controls.
  • Don't upload anything you wouldn't email to a stranger.
  • If in doubt, redact, paraphrase, or ask IT.

03

How to Use AI

The single biggest factor in getting a great answer from AI is the quality of what you send it. A simple framework — and the willingness to keep talking to it — does most of the work.

The RTCF Prompting Framework

RTCF prompting framework template — Role: You are a [specific expert with clear expertise]. Task: Your goal is to [verb-forward instruction]. Context: Here's what you need to know [detailed background]. Format: Structure your response as [specific format with examples].

Before vs. After: A Simple Example

Lazy prompt

Give me a good chicken recipe for dinner.

RTCF prompt

You are a weeknight home-cook coach. Give me a chicken recipe I can make for two people in under 35 minutes using ingredients I likely have in a basic pantry. Skip anything that needs to marinate. Format as: title, ingredients (with substitutions), 5–7 numbered steps, and a "what to serve with it" line.

It's a Conversation

Don't accept the first response. Refine. Redirect. Repeat. That's how you get great work — same as working with a junior teammate who's brilliant but needs guidance. "Make it shorter." "More skeptical." "Try it from the other person's point of view." Each exchange compounds.

You Still Matter

AI is a force multiplier, not a replacement. You bring the taste, the judgement, the relationships, the accountability. Humans are still an essential element of every AI workflow — and the people who do best with these tools are the ones who stay in the loop, not the ones who try to outsource themselves.

A Sample Workflow

Five steps that turn a one-shot prompt into a real working session. The middle three are where the actual quality lives — most people skip straight from "Use AI?" to "Polish," and that's why their results disappoint.

Sample workflow: 01 Use AI? — Determine if it's a good use case. 02 Framework — Create a conversation foundation. 03 Refine — Use follow-up requests to refine. 04 Review — You are the taste maker. 05 Polish — Small manual changes if needed.

04

Prompting Tips

Beyond RTCF, a small set of techniques will make almost every prompt better. Each tip below comes with a copy-pasteable starter you can drop into any AI tool today.

01

Ask Me Questions

Stop guessing what info the AI needs. Let it interview you.

Before you answer, ask me one question at a time until you have enough context to give me a great response. Then answer.

02

Help Me Create a Prompt

Use AI to write better prompts for AI.

I want to create [a meeting agenda for my team's quarterly review]. Help me create a prompt that will produce a great result. Ask me clarifying questions first if you need them.

03

Match My Tone

AI defaults to a generic voice. Anchor it to yours.

Here's a sample of how I write. Match this tone and style in the response below:

[paste 1–2 paragraphs of your writing]

Now respond to: [your actual ask]

04

Favorite Commands

A few one-liners that punch way above their weight.

Rewrite this paragraph using a friendly tone of voice.
Elaborate on the third bullet.
Make it half as long without losing the substance.
What's the strongest objection to this argument?
Translate this for a non-technical audience.

05

Upload Files

Most modern AI tools accept PDFs, docs, spreadsheets, and images. Use them.

Upload a long PDF and ask: "Summarize the key arguments in 5 bullets, then list the 3 weakest claims and why."

06

Show, Don't Just Tell

One concrete example beats five sentences of instructions.

Here's an example of the kind of output I want:

[paste a great example]

Now produce something similar for: [your input]

05

Beyond Prompting

Once you can prompt confidently, the next leap isn't a better prompt — it's changing how you work with AI. Here's where things are going.

01

Context Is the New Prompt

AI cannot do a good job unless it understands the situation. The most leveraged thing you can do isn't writing cleverer prompts — it's feeding the model the right context: the document, the brief, the prior conversation, the data, the constraints, the audience.

Diagram contrasting too-little context (generic reply) with enough context (relevant, on-target reply).

02

From Copy-Paste to Connected

The old workflow was: copy from a doc → paste into AI → copy the answer → paste it back. The new workflow is: connect AI directly to your approved sources and tools so it can read and write where the work actually lives. Less friction, fewer errors, and the AI sees the full picture.

Side-by-side: a person manually gathering and pasting context vs. AI pulling context directly from connected sources.

03

Cross-Model Critique

Use one model to challenge another. Take the output of model A, paste it into model B, and ask "what's wrong with this?" Different models have different blind spots; making them argue surfaces problems neither would catch alone.

Loop diagram: Model A drafts, Model B critiques, Model A revises, repeat — yielding a stronger output.

04

AI on Your Desktop

AI is no longer just a browser tab. Tools like Claude for Desktop and ChatGPT's Mac app can read what you're working on, drive your applications, and stay in the flow with you. The barrier to "let me ask the AI" keeps dropping.

Project workspace folder structure: 00_Inbox, 01_Source/Context, 02_For Review, 03_Final Output.

05

Agents

Until now: you ask, it answers, you copy-paste, you execute the plan. With agents: you ask, and it executes the plan. It books the flight, files the ticket, sends the message, runs the code. This is the next major shift, and it's already starting.

Example agent workflow planning a Santa Barbara trip — searching, comparing flights and hotels, and adding to calendar.

06

Prompt Library

Copy-paste prompts you can use today. Each one starts with role and task, gives the model the context it needs, and tells it the format you want back — the RTCF framework in action. Replace the [bracketed bits] with your own details.

Writing

Email reply that sounds like you
You are an editor who writes in a clear, warm, no-fluff voice — short sentences, contractions OK, no corporate hedging. Below is an email I received and a rough draft of my reply. Rewrite my reply in that voice. Keep my actual points; tighten the wording, cut throat-clearing, and end with a clear next step. Return only the rewritten email.

EMAIL I RECEIVED:
[paste]

MY DRAFT REPLY:
[paste]

Tighten this paragraph
You are a ruthless line editor. Cut filler, replace weak verbs, remove throat-clearing, and shorten wherever possible without losing meaning. Keep the original tone. Return: (1) the tightened version, (2) a short bullet list of what you cut and why.

PARAGRAPH:
[paste]

Translate into plain language
Rewrite the text below for a smart but non-expert reader. No jargon — if a technical term is essential, define it inline. Aim for short sentences and concrete examples. Preserve every fact; do not add new claims. Return only the rewritten version.

TEXT:
[paste]

Meetings

Pre-meeting brief
You are my chief of staff. I have a meeting with [name/role] in [time] about [topic]. Based on the notes below (and any recent context you have), give me:
1. A one-line goal for the meeting
2. The 3 questions I should ask
3. The 2 most likely points of pushback and how I'd handle each
4. One thing I should NOT bring up

NOTES / CONTEXT:
[paste]

Summarize meeting notes into actions
Read the meeting notes below. Produce two outputs:
1. A 5-bullet executive summary
2. An action list with columns: owner, action, due date (use "TBD" if not mentioned)

Don't invent owners or deadlines. If the notes are ambiguous, flag it instead of guessing.

NOTES:
[paste]

Draft a follow-up email
Below are my notes from a meeting with [name]. Draft a follow-up email from me to them that: thanks them, summarizes what was decided in 3 short bullets, names the next concrete step with an owner and date, and ends with a clear question. Match my voice — direct, friendly, no jargon. Keep it under 150 words.

NOTES:
[paste]

Strategy & analysis

Steelman the other side
I'm leaning toward [decision/position]. Steelman the strongest case AGAINST it — not strawman objections, the most rigorous version a smart critic would actually make. Give me 4–6 points, each with a one-sentence rebuttal I'd need to have ready. End with: "If you only address one of these, address [X]" and explain why.

CONTEXT:
[paste]

Pre-mortem on a decision
It's six months from now and the decision below failed. Walk me through the most likely failure mode: what specifically went wrong, what early signals would have warned us, and what we should change about the plan now to avoid it. Be concrete; no generic risk-register filler.

DECISION:
[paste]

Compare options on the criteria that matter
I'm choosing between [option A], [option B], and [option C] for [purpose]. The criteria I actually care about are [list]. Build a comparison table: rows = criteria, columns = options. In each cell, give a short verdict (1–2 sentences) — not a score. End with a one-paragraph recommendation that names tradeoffs explicitly.

CONTEXT:
[paste]

Documents

Contract risk scan
You are a careful contract reviewer (not a lawyer). Read the contract attached. Output:
1. A 5-bullet plain-language summary of what we're agreeing to
2. The 3 clauses most likely to bite us, with the exact quoted text and why it matters
3. Anything that looks unusual compared to a standard [agreement type]
4. A list of questions to ask before signing

I am the [your role/company]. Counterparty is [their role/company].

Compare two documents
Compare the two documents attached. Produce: (1) a one-paragraph summary of the key differences, (2) a side-by-side table of every meaningful change, (3) a flag list of any change a reasonable reader might miss. Quote exact text in the comparison cells.

Summarize a long PDF for an exec
Summarize the attached document for a busy executive who has 60 seconds. Format:
- One-sentence TL;DR
- 3 bullets of the most important findings
- 2 bullets on what the exec needs to decide or do
- One quote (with page number) that captures the document's heart

No fluff. No "in conclusion." Cut anything that doesn't pay rent.

Slides & presentations

Outline a deck from a brief
Outline a [length]-minute presentation for [audience] on [topic]. Output a slide-by-slide list with a title, a one-sentence purpose, and 3–5 bullet points of content per slide. Keep cognitive load light — each slide should make one point. Open strong (a question or surprise), not with an agenda slide.

BRIEF:
[paste]

Tighten my slide titles
Below are slide titles from my deck. Rewrite each as a short, declarative sentence that states the slide's single takeaway — not its topic. ("Q3 Revenue" → "Q3 revenue grew 22%, driven by enterprise.") Keep my voice; cut hedges. Return as a numbered list matching my input order.

TITLES:
[paste]

Coding & data

Explain this code
Below is a piece of code. Explain it to a competent engineer who didn't write it: what it does at a high level, the key control flow, anything subtle or non-obvious, and any bugs or smells you notice. Use specific line references. Don't paraphrase the code; explain it.

CODE:
[paste]

Pull insights from a messy CSV
Attached is a CSV. The columns are [list], roughly representing [domain]. I want to know:
- [question 1]
- [question 2]
- Anything else interesting you notice

Do the analysis in code (you can run Python). Output: the answer to each question with a number and a one-sentence interpretation, plus one chart for any finding that deserves a picture. If the data is too messy to answer something, say so — don't fudge.

07

Glossary

The vocabulary you'll hit when you read about AI in the wild — terms used in product release notes, tech press, and water-cooler conversations. Skim it once; come back when something doesn't make sense.

LLM (Large Language Model)
The neural network behind ChatGPT, Claude, Gemini, and similar tools. Trained on huge amounts of text to predict the next word, which — at scale — turns into reading, writing, reasoning, and coding ability.
Token
The unit of text a model reads and writes — roughly three or four characters, or about three-quarters of a word. "AI" is one token; "antidisestablishment" is several. Pricing and context limits are usually measured in tokens.
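
A common back-of-envelope rule is about four characters per token. This sketch is only that heuristic, not a real tokenizer (tools like OpenAI's tiktoken give exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token. A heuristic for
    sizing prompts, not the model's actual tokenizer."""
    return max(1, round(len(text) / 4))
```

Good enough for judging whether a document will fit in a context window; use the vendor's tokenizer when the exact count matters.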
Context Window
The maximum amount of text the model can hold in mind at once — the prompt plus the response plus any attachments. Once you exceed it, the earliest content gets dropped. Modern windows range from ~32k to 1M+ tokens.
Prompt
Whatever you send to the model — a question, an instruction, a document, an image. The quality of your prompt is the single biggest factor in the quality of the response.
System Prompt
A persistent instruction the tool applies before every message in a conversation, setting role, tone, or rules. Custom Instructions, Claude Styles, and Gemini Saved Info are flavors of this.
Hallucination
When the model confidently states something false. Names, citations, statistics, and quotes are the most common offenders. The fix isn't trust — it's verification, web search, or grounding the answer in a document you provided.
Grounding
Tying the model's answer to a specific source — a document you uploaded, a search result, a database. Grounded answers cite back to the source, dramatically reducing hallucinations.
RAG (Retrieval-Augmented Generation)
A pattern where the system first searches a knowledge base for relevant chunks, then hands those to the model along with your question. How most "chat with your documents" features work under the hood.
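The pattern can be sketched in a few lines. Real systems retrieve with embeddings and a vector index; keyword overlap stands in for that here, and the example chunks and question are invented:

```python
import re

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, chunks, k=2):
    # Score each chunk by word overlap with the question (a toy
    # stand-in for embedding similarity), keep the k best.
    q = words(question)
    return sorted(chunks, key=lambda c: len(q & words(c)), reverse=True)[:k]

def build_prompt(question, chunks):
    # Hand the retrieved chunks to the model alongside the question.
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"
```

Swap the overlap score for embedding similarity and the list for a vector database, and you have the shape of most "chat with your documents" features.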
Fine-tuning
Further training of a model on a custom dataset to specialize it. Less common than it used to be — for most tasks, prompting and RAG do the job without the cost of fine-tuning.
Inference
A single run of the model — your prompt in, the model's answer out. "Inference cost" is what you pay per call; "inference latency" is how long the model takes to respond.
Reasoning / Thinking Model
A model that visibly works through a problem step-by-step before answering, trading time for accuracy on complex tasks. Claude Extended Thinking, ChatGPT o-series, Gemini Deep Think, Grok Think.
Agent
A model setup that doesn't just respond — it plans, takes actions, observes results, and iterates. Think "the AI books the flight" rather than "the AI tells you which flight to book."
Tool Use / Function Calling
The model's ability to call external tools (a search engine, a calculator, your calendar API) instead of guessing. The mechanism that turns a chat model into an agent.
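A toy version of the mechanism: the model replies with a structured call instead of prose, and the host code runs the matching function and feeds the result back. The tool names and JSON shape below are invented for illustration; each vendor defines its own format.

```python
import json

TOOLS = {
    "add": lambda a, b: a + b,                   # calculator stand-in
    "weather": lambda city: f"Sunny in {city}",  # fake API stand-in
}

def handle(model_reply: str):
    """Parse the model's tool call and execute it, returning the result
    that would go back to the model on the next turn."""
    call = json.loads(model_reply)  # e.g. {"tool": "add", "args": {"a": 2, "b": 3}}
    return TOOLS[call["tool"]](**call["args"])
```

The key idea: the model never runs anything itself. It asks; your code (or the vendor's) executes and reports back.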
MCP (Model Context Protocol)
An open standard for connecting AI assistants to outside data sources and tools — Gmail, Notion, Slack, GitHub, your database. Anthropic introduced it; most major vendors now support it.
Multimodal
A model that handles more than just text — images, audio, video, code. Modern flagships (GPT-5, Claude 4, Gemini 2.5) are all multimodal in and out.
Embedding
A numerical representation of a chunk of text that captures its meaning. Two pieces of text with similar meaning have similar embeddings. The math behind semantic search and most RAG systems.
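"Similar meaning, similar embeddings" is usually measured with cosine similarity. The three-dimensional vectors below are invented for illustration; real embeddings come from a model and have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented vectors standing in for embeddings of three sentences
dog   = [0.9, 0.1, 0.0]   # "I love my dog"
puppy = [0.8, 0.2, 0.1]   # "My puppy is great"
tax   = [0.0, 0.1, 0.9]   # "File your taxes by April"
```

cosine(dog, puppy) comes out far higher than cosine(dog, tax), which is exactly how semantic search ranks results.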
Temperature
A knob that controls how predictable the model's output is. Low temperature = same answer every time, sticks to safe choices. Higher temperature = more variety, more creativity, more risk of weirdness.
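Mechanically, temperature divides the model's raw scores (logits) before they are turned into probabilities. A sketch with made-up logits:

```python
import math

def probabilities(logits, temperature=1.0):
    """Softmax with temperature: low T sharpens the distribution toward
    the top choice, high T flattens it toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits [2.0, 1.0, 0.5], temperature 0.1 puts almost all probability on the first choice, while temperature 2.0 spreads it across all three.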
Knowledge Cutoff
The date the model's training data ends. Without web search, the model genuinely doesn't know about anything that happened after that date — and may confidently make things up if you ask.
Prompt Injection
A class of attacks where hostile instructions are hidden in content the model reads (a webpage, an email, a PDF) and try to override what you actually asked. Why agents and connectors are designed cautiously.
Open Weights vs. Closed Weights
Open-weights models (Llama, DeepSeek, Mistral) have their parameters published — you can run them yourself. Closed-weights models (GPT, Claude, Gemini) are accessed only through the vendor's API or product.
Vibe-coding
Building software primarily by describing what you want in natural language to an AI coding tool, accepting and iterating on what it produces, rather than writing each line yourself. Cursor, Claude Code, and GitHub Copilot are common stacks.

FAQ

The questions I hear most after every talk — especially from teams trying to figure out how to use this stuff at work without breaking anything.

Is it safe to use AI with confidential or proprietary information?

Check your company's AI policy first — most have one now. As a default: don't paste customer data, financial details, contracts, or anything you wouldn't email outside the company into a free consumer chatbot. Enterprise plans (ChatGPT Team/Enterprise, Claude for Work, Microsoft Copilot for M365, Gemini for Workspace) keep your data private and out of training. If your employer offers one of those, use it.

There are so many tools — which one should we pick?

For 90% of work tasks, the major chat tools are interchangeable. ChatGPT, Claude, Gemini, and Copilot are all good. Pick the one your employer already pays for, or whichever feels most natural to talk to. The bigger lever isn't tool choice — it's how skillfully you use the one you have.

How do I handle hallucinations? Doesn't the AI make stuff up?

Yes, sometimes confidently. The fix is twofold: (1) turn on Web Search when facts matter — the model cites sources you can click through. (2) Treat AI output the way you'd treat a draft from a brilliant junior — verify anything that has consequences if it's wrong. Names, dates, statistics, and citations are the most common offenders.

Is the free version good enough, or should we pay?

Free tiers are surprisingly capable for everyday use. You hit limits when you (a) need higher usage volume, (b) want the latest/strongest models, (c) need bigger context windows for long documents, or (d) need privacy controls for work data. If you're evaluating as an individual: start free for two weeks. If you find yourself hitting walls, $20/month for the paid tier is almost always worth it.

Will my conversations be used to train the AI?

Depends on the tool and the tier. Consumer free/personal accounts often default to using your chats for training (you can usually turn it off in settings). Business and enterprise tiers (ChatGPT Team/Enterprise, Claude for Work, Copilot for M365) don't train on your data by default. If privacy matters, check the data settings on whichever tool you're using and opt out of training.

I'm not technical — can I actually use these tools well?

Yes. Being good at AI is mostly about being good at communicating what you want — which is a writing and thinking skill, not a coding skill. The people who get the most out of AI tend to be the people who would have been great at delegating to a smart assistant. If you can write a clear email, you can prompt well.

How do I get my team started without it being chaos?

Start with one approved tool, one clear policy on what data is OK to use it for, and a small set of high-leverage use cases (drafting, summarizing, brainstorming). Let early adopters share what's working in a shared channel. Don't try to mandate it top-down or ban it bottom-up — both fail. If you want help structuring this, that's a lot of what I do — see the section above.

About Justin

Justin Nikolaus

A decade of building AI products at Microsoft and Amazon. Now helping organizations put AI to work.

I've spent more than ten years at the intersection of AI and design — shaping how people experience intelligent technology. At Microsoft, I helped embed intelligent features into Windows. At Amazon, I shaped Alexa experiences, including the voice interfaces NASA used for astronauts on lunar missions.

Today I work with companies and event audiences on the practical side — what AI is actually good at, where it falls down, and how to bring it into the work without the hype or the hand-wringing.

practicalaiservices.com →

Previous Work

  • Amazon
  • NASA
  • Microsoft
  • Target