A practical guide to the design thinking tools, software, and platforms used by teams in 2026 — categorized by job, with honest tradeoffs.
There is no single "design thinking tool." There is a stack of tools, each doing a specific job inside a five-stage process. The teams that get the most out of design thinking treat their tooling the way a carpenter treats a workshop: a small number of well-chosen instruments, each used for the work it was built for, none expected to do everything.
This guide is a practical map of that workshop. It groups the tools and software design teams actually reach for in 2026 by the job they do — not by vendor marketing categories — and it ends with a decision framework for choosing what belongs in your own kit. Where a category has converged on two or three clear leaders, those names appear. Where the field is still fragmented, that is named honestly. Nothing here is a sponsored placement, and no tool is included that the author has not seen used in real projects.
Before listing software, it helps to be clear about what tools are actually being asked to do. A complete design thinking toolkit operates at three layers: frameworks that structure the work, job-specific software for individual stages, and process integration that keeps stage outputs connected. Confusing these layers is the most common reason teams over-buy software they never adopt.
Most teams need all three layers. They need the framework to know what they are doing, the job-specific software to do it well, and either a full-process platform or a disciplined manual practice to keep the project's context coherent across stages. Skipping any layer creates a recognizable failure mode: skipping frameworks produces busy work, skipping job-specific software produces messy artifacts, skipping process integration produces six disconnected stage outputs that never become a coherent design.
Frameworks are the well-known scaffolds, and the good news is that all of them are free and well-documented. The bad news is that many teams treat them as the work itself rather than as containers for the work. Filling in an empathy map without conducting empathy interviews produces a useless artifact, no matter how neat the columns look.
The frameworks worth knowing by name, organized by stage:
- Empathize: empathy interviews, empathy maps, and journey mapping.
- Define: personas, problem statements, and How Might We question framing.
- Ideate: divergence-then-convergence brainstorming, silent solo writing rounds, and dot voting.
- Prototype: paper sketching and storyboarding.
- Test: moderated task sessions and the "I like / I wish / What if" feedback structure.
Two reference libraries are worth bookmarking and using as the canonical source for the frameworks themselves: Stanford d.school's Resources and IDEO's Design Kit. Both are public, both are maintained, and both predate every piece of software listed below. If a team is just starting out, they should download a few canvases from one of those libraries and try the process on paper before evaluating any software at all. A team that cannot run design thinking on a whiteboard will not run it better in a SaaS app; they will just produce more expensive confusion.
Below is the working stack — categorized by the job each tool does best. The point of organizing the list this way is to make substitutions explicit: any tool listed in a category can usually replace any other tool in that category without changing how the rest of the stack functions. This is the test of a healthy toolkit. If swapping one tool requires re-engineering the others, the team is too entangled with a single vendor.
The research stage produces transcripts, observation notes, photos, screen recordings, and survey data. The job is to store all of it in a way that makes patterns surface across multiple sessions. The category leaders are Dovetail and Condens for tagged research repositories, and Notion or Confluence for teams that prefer a general-purpose workspace over specialized research software. Spreadsheets work too — many of the best research synthesis projects in the field have been done in a single shared sheet.
The job-specific software earns its keep when the team is running more than ten interviews per project and needs to find the third quote about the same frustration without re-reading every transcript. Below that volume, a folder of well-named documents and a discipline of tagging-as-you-go is enough.
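What tagging-as-you-go buys is searchability without a subscription. Below is a minimal sketch in Python of the folder-of-documents approach; the `notes/` folder name and the inline hashtag convention (e.g. `#pricing-confusion`) are assumptions for illustration, not a standard.

```python
#!/usr/bin/env python3
"""Surface recurring tags across a folder of plain-text research notes."""
import re
import sys
from collections import defaultdict
from pathlib import Path

# Inline tags look like #pricing-confusion; the convention is illustrative.
TAG = re.compile(r"#([a-z][\w-]*)")

def index_notes(folder: Path) -> dict[str, list[str]]:
    """Map each tag to every line (quote, observation) that carries it."""
    index: dict[str, list[str]] = defaultdict(list)
    for path in sorted(folder.glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            for tag in TAG.findall(line):
                index[tag].append(f"{path.name}: {line.strip()}")
    return index

if __name__ == "__main__":
    folder = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("notes")
    # Most-mentioned tags first; three or more mentions is a pattern worth a look.
    for tag, lines in sorted(index_notes(folder).items(), key=lambda kv: -len(kv[1])):
        print(f"#{tag} ({len(lines)} mentions)")
        for entry in lines:
            print(f"  {entry}")
```

Across ten interviews' worth of notes, this turns "find the third quote about the same frustration" into a lookup rather than a re-read; the specialized repositories do the same job with a nicer interface and multi-user tagging.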
The whiteboarding and ideation category is the most crowded and the one most teams over-invest in. The work being done is sticky-note clustering, journey mapping, How Might We question generation, and dot voting. Three tools dominate the category in 2026: Miro, FigJam (Figma's whiteboard product), and Mural. Functionally they are interchangeable for the design-thinking workloads listed here. The choice usually comes down to what the rest of the company already uses — picking the one your stakeholders are already logged into is worth more than any feature comparison.
Two honest cautions about this category. First, the appeal of an infinite canvas is also its danger: a workshop output can look impressively dense without containing a single clear decision. The discipline that turns a Miro board into a useful artifact is the same discipline a facilitator would bring to a physical wall — explicit time-boxing, an agreed convergence ritual, and a written summary at the end. Second, an asynchronous whiteboard is a poor substitute for a synchronous workshop with people in the same room or on the same call. The tool can host the artifact; it cannot replace the conversation.
For workshop facilitation, the job is structuring the agenda, the time-boxing, and the participant flow of a multi-hour or multi-day design thinking workshop. SessionLab is the category leader for facilitators who run workshops as a regular practice; it provides agenda templates, a method library, and timing tools. For one-off workshops, a shared document with a clear timeline, a slide deck for transitions, and a kitchen timer covers the same ground, as the sketch below shows.
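At the kitchen-timer end of the spectrum, even the timeline can live in a few lines of code. A minimal sketch in Python; the agenda segments and durations are placeholders, not a recommended workshop structure.

```python
from datetime import datetime, timedelta

# Placeholder agenda: (segment name, duration in minutes).
AGENDA = [
    ("Welcome and framing", 10),
    ("Silent solo writing", 15),
    ("Share-out and clustering", 30),
    ("Dot voting", 10),
    ("How Might We reframing", 20),
    ("Wrap-up and written summary", 15),
]

def print_timeline(start: datetime) -> None:
    """Print each segment with its start time so the facilitator can timebox."""
    clock = start
    for name, minutes in AGENDA:
        print(f"{clock:%H:%M}  {name} ({minutes} min)")
        clock += timedelta(minutes=minutes)
    print(f"{clock:%H:%M}  End")

# Assume a 09:00 start; adjust to the real workshop slot.
print_timeline(datetime.now().replace(hour=9, minute=0, second=0, microsecond=0))
```

Pasting the output into the shared agenda document keeps the timeline and the time-box math in one place.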
Workshop facilitation is the area where the ratio of frameworks-to-software is highest. The frameworks (timeboxing, divergence-then-convergence, the "I like / I wish / What if" feedback structure, the silent solo writing rounds before group discussion) do the heavy lifting. Software helps a regular facilitator stay organized; it does not substitute for facilitation skill.
Prototyping splits cleanly by fidelity. For low-fidelity sketching and storyboarding, paper and a smartphone camera remain unbeaten — fast, throwaway, and free of the temptation to polish. For mid-fidelity wireframes and clickable mocks, Figma is the de facto standard for digital products in 2026, with Sketch retaining a smaller user base. For coded prototypes that need to behave like a real product (touching live data, testing performance), the team is into engineering territory and the tooling question becomes a frontend stack question, not a design tool question.
For non-digital prototyping (a service, a physical product, a workshop format), the best tool is a willing collaborator and a prepared script. There is no software that turns a service prototype into something useful; the prototype is the role-play.
Two distinct jobs hide under the single heading of user testing. The first is moderated testing — sitting with a participant, watching them attempt tasks, asking follow-up questions in real time. The required tools are a video conferencing app (Zoom, Google Meet, anything that records), a way to share the prototype, and a notes template. No specialized software needed.
The second is unmoderated testing — recruiting participants who complete tasks on their own time while the software records their screen and voice. Maze, UserTesting, and Lookback are the established options. Unmoderated testing is faster and cheaper per session but loses the ability to follow a thread of confusion as it appears. Most teams should run a small number of moderated sessions before a larger unmoderated sweep, not the other way around.
Full-process platforms are the newest category and the one with the most marketing noise. The promise of a full-process platform is that the cumulative context of a project — the empathy interviews, the personas, the problem statement, the ideation outputs, the prototype, the test results — lives in one place rather than being scattered across a research tool, a whiteboard, a Figma file, and a deck. The honest tradeoff is that a single integrated platform is rarely best-in-class at any one stage; the depth of a dedicated research repository or a dedicated whiteboard usually beats the breadth of a platform that does both.
Three patterns exist for solving the cumulative-context problem:
- An integrated full-process platform that hosts every stage's artifacts natively, trading per-stage depth for continuity.
- The wiki pattern: a general-purpose workspace such as Notion or Confluence, with a strict template per stage and one index page per project.
- Disciplined folders: plain, well-named files in shared storage, organized by stage and tied together by a one-page summary.
None of these is universally right. Teams running occasional design thinking projects on top of other work usually do best with the wiki pattern. Teams running design thinking as their core practice across many projects benefit most from either an integrated platform or a serious investment in the wiki pattern with strict templates. Teams of one or two often do better with disciplined folders than with any software at all.
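For teams choosing the disciplined-folders route, the whole "tool" can be a convention plus a scaffolding script. A minimal sketch in Python; the stage-numbered folder names and the SUMMARY.md filename are assumptions, not a standard.

```python
from pathlib import Path

# One folder per stage of the five-stage process, plus a one-page index.
STAGES = ["1-empathize", "2-define", "3-ideate", "4-prototype", "5-test"]

def scaffold(project: Path) -> None:
    """Create the stage folders and a summary stub for a new project."""
    for stage in STAGES:
        (project / stage).mkdir(parents=True, exist_ok=True)
    summary = project / "SUMMARY.md"
    if not summary.exists():  # never clobber an existing summary
        links = "".join(f"- [{s}](./{s}/)\n" for s in STAGES)
        summary.write_text(f"# Project summary\n\n{links}", encoding="utf-8")

scaffold(Path("projects/onboarding-redesign"))  # hypothetical project name
```

The point is not the script; it is that the convention is cheap to enforce, exports as plain files, and ages well when someone opens the project a year later.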
AI has become a horizontal capability across the categories above rather than a category of its own. A whiteboard tool with AI sticky-note clustering, a research tool with AI transcript summarization, a prototyping tool with AI design generation, and a platform with AI guidance at each stage all use the same underlying capability for different jobs.
The honest evaluation question is not "does this tool have AI?" It is "does this tool's AI feature actually save time on the specific job the tool exists to do, or is it a feature added so the box can be checked?" The way to find out is to use it on a real project, not to watch the demo. Teams that adopt AI features successfully treat them as accelerants for divergent work (generating options, summarizing volume) and resist using them for convergent work (selecting which option matters, deciding which transcript was the breakthrough). The deeper treatment of this distinction lives in the dedicated guide on AI in design thinking; the relevant point for tool selection is that AI features are not differentiators in 2026 but table stakes, and they are no substitute for the discipline of the underlying method.
The right toolkit depends on three variables: how often the team runs design thinking projects, how distributed the team is, and how mature the team's facilitation practice is. Run through the following four questions before adding any tool to the stack.
- What job is this tool doing that the existing stack doesn't? If the answer is "the same job, slightly better," the tool is probably not worth the cost of switching. Tool churn has a hidden cost in lost team familiarity that almost always exceeds the marginal feature gain.
- Who else needs to work in it? A best-in-class tool that only the design team can use is a worse choice than a good-enough tool that stakeholders, product managers, and engineers will actually open. The number of accounts a tool requires from non-designers is a real cost.
- What happens when the project ends? Where does the artifact live a year from now, when the team has moved on and someone needs to understand what was decided and why? Tools that produce easily exportable, human-readable artifacts (a PDF, a markdown document, a printable canvas) age well. Tools that lock the work inside a proprietary format age badly.
- Can the team explain why this tool, and not the simpler alternative? If the only honest answer is "everyone seems to use it," the tool is probably overkill. The simpler alternative — a shared document, a paper canvas, a single-purpose script — is usually the right choice until a real constraint forces the upgrade.
A useful starting stack for a team running design thinking for the first time on a real project: a copy of the core templates printed or copied into a shared document, a video call tool with recording, a single whiteboard tool the team is already comfortable with, Figma if the prototype is a digital interface, and a one-page project summary that links to everything else. That stack covers all five stages, costs nothing or close to nothing, and scales further only when a specific friction point makes the next addition obviously worthwhile.
A few categories of software are routinely marketed as "design thinking tools" and rarely earn the label honestly. Project management software (Jira, Asana, Linear) is essential for shipping the result of a design thinking project but contributes nothing to the divergent and synthesis work that defines the methodology. Diagramming software (Lucidchart, Visio) handles a narrow slice of the work — system mapping, service blueprints — and is not where ideation or testing happens. AI chat assistants on their own are not a design thinking tool; they are a capability that other tools embed for specific jobs.
Calling these tools "design thinking tools" is not wrong exactly; it is just not specific. A team that adopts a generic project tracker and expects it to make their design thinking practice better is going to be disappointed, and the disappointment is the tool's fault only insofar as the marketing oversold the relationship.