Hey, this is my Sunday experiment, built around something I wanted to learn: how different models think, reason, and generate, especially when it comes to structured research and strategy. Experimentation is essential for any AI founder.

I would love to hear about your own side-project experiments as a founder.

👉 Connect with me on LinkedIn or email me at [email protected].

Let’s share what we discover through these experiments.

⚙️ How It Works

This is a 6-step prompt sequence you can run across multiple LLMs (ChatGPT, Claude, Perplexity, etc.) to see how each model performs on the same structured analysis.

💡 Run each prompt one after another, not at the same time.

Each builds logically on the previous output.

The prompts could be refined a lot further; this was just a two-hour experiment.
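
If you prefer scripting the sequence instead of pasting each prompt into a chat UI, the loop is simple: keep the last answer and splice it into the next prompt. Below is a minimal Python sketch assuming the OpenAI Python SDK; the model name, the hypothetical PROMPTS list, and the {previous_output} placeholder are illustrative assumptions, not part of the original write-up, and the same loop works against any provider's chat API.

```python
# Minimal sketch of the run loop. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set. PROMPTS, the model name, and the
# {previous_output} placeholder are illustrative, not from the original post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Step 1 prompt text goes here",
    "Step 2 prompt text goes here\n\nPrevious output:\n{previous_output}",
    # ...steps 3-6 follow the same pattern, each referencing {previous_output}
]

def run_sequence(model: str = "gpt-4o") -> list[str]:
    """Run the prompts one after another, feeding each answer into the next."""
    outputs: list[str] = []
    previous_output = ""
    for prompt in PROMPTS:
        message = prompt.format(previous_output=previous_output)
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": message}],
        )
        previous_output = response.choices[0].message.content
        outputs.append(previous_output)
    return outputs
```

To compare models, call run_sequence once per model name and diff the outputs side by side.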

🧩 Step 1 — Market Landscape

Description:

Kicks off the research. It maps the overall category and identifies the top tools or players shaping it. This step helps you “see the forest before the trees” — who’s building what, how they’re positioned, and where the whitespace might be.

What to add:

Your industry or product category, in place of the [replace with your industry] placeholder in the prompt below.

Prompt:

You are a research strategist analyzing the landscape of AI [replace with your industry] tools.

Your goals:
1. Generate 20 distinct search queries that real buyers or researchers might use to find products in this category.  
2. For each query, list 3–5 relevant tools with a one-line summary and a working URL.  
3. Combine and deduplicate all results into a single **Top 20 list** with names, summaries, and URLs.  
4. Summarize the top market trends, dominant models, and whitespace opportunities.

Output format:
- List of 20 search queries  
- Top 20 tools (name, one-liner, URL)  
- 3–5 trend or whitespace insights
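
To run Step 1 against a second provider for comparison, here is a minimal sketch assuming the Anthropic Python SDK; the model ID, the example industry ("sales enablement"), and the max_tokens value are placeholders I chose for illustration, not part of the original prompt.

```python
# Sketch: fill the industry placeholder in the Step 1 prompt and send it to a
# second model for comparison. Assumes the Anthropic Python SDK is installed
# and ANTHROPIC_API_KEY is set; model ID and example industry are placeholders.
import anthropic

STEP_1_PROMPT = """You are a research strategist analyzing the landscape of AI [replace with your industry] tools.

Your goals:
1. Generate 20 distinct search queries that real buyers or researchers might use to find products in this category.
2. For each query, list 3-5 relevant tools with a one-line summary and a working URL.
3. Combine and deduplicate all results into a single Top 20 list with names, summaries, and URLs.
4. Summarize the top market trends, dominant models, and whitespace opportunities.

Output format:
- List of 20 search queries
- Top 20 tools (name, one-liner, URL)
- 3-5 trend or whitespace insights"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = STEP_1_PROMPT.replace("[replace with your industry]", "sales enablement")
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model ID; swap for a current one
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```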