- The Paradox of “AI Welcome” and “Distrust” Arriving in the Consulting Industry
- The “Three Transparencies” Clients Truly Seek
- Practical Framework: A “Trustworthiness Checklist” for AI-Powered Consulting
- Learning from Leading Examples: The Direction Indicated by GUGA’s “GenAI HR Awards”
- The Next Move as an Executive: Cultivating an Eye for Judging the “Quality” of AI Use
The Paradox of “AI Welcome” and “Distrust” Arriving in the Consulting Industry
A paradigm shift is quietly underway in the consulting industry. According to a report by Diamond Online, a significant 67% of large corporate clients responded that they “greatly welcome” consulting firms’ use of AI. This indicates that AI has evolved from a mere internal tool to an element that redefines the relationship with clients.
However, behind this welcoming mood lie “stringent demands” from clients. The processes and deliverables of AI-powered consulting are being scrutinized more closely than ever before. From the perspective of executives and CTOs who engage external consultants, this is a natural consequence.
Speaking as a company that itself uses AI internally, cutting 1,550 hours of annual workload and achieving a 2,989% ROI, we find the potential of AI undeniable. At the same time, however, the quality of AI-generated output depends heavily on the input data and prompt design. It is this fundamental understanding that explains why clients “welcome AI use” while also presenting “stringent demands.”
The “Three Transparencies” Clients Truly Seek
So, what specific demands are being made? Delving deeper into the report and drawing on our own consulting experience, we find that the following three transparencies are at the core.
Process Transparency: “How” AI Was Used
This is the most basic yet most critical demand. Clients are concerned that consultants might simply be presenting answers obtained by querying ChatGPT. What is sought is the “visualization of the process,” such as the rationale for selecting AI tools, the logic behind prompt design, and the verification process for generated results.
For example, when I use AI for contract review, I always document and share the following process with the client:
- AI Tool Used: Claude 3.5 Sonnet (chosen for its excellence in understanding legal documents)
- Input Data: Full contract text + related legal database
- Prompt Design: Specific instructions like “Extract risk points from the perspective of Japan’s Antimonopoly Act”
- Human Verification: AI-identified issues are legally re-evaluated by a qualified lawyer on staff
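The documentation steps above can be captured as a machine-readable audit record that is shared with the client alongside the deliverable. A minimal sketch in Python (the `AIUsageRecord` class and its field names are illustrative, not from any specific tool):

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIUsageRecord:
    """Audit record documenting how AI was used on a deliverable."""
    tool: str                 # which model was used, and why it was chosen
    input_data: str           # what was provided to the model
    prompt_summary: str       # the logic behind the prompt design
    human_verification: str   # who re-checked the output, and how
    created: str = field(default_factory=lambda: date.today().isoformat())

record = AIUsageRecord(
    tool="Claude 3.5 Sonnet (strong at legal-document comprehension)",
    input_data="Full contract text + related legal database",
    prompt_summary="Extract risk points from the perspective of Japan's Antimonopoly Act",
    human_verification="AI-flagged issues re-evaluated by a qualified in-house lawyer",
)

# Serialize the record so it can be attached to the deliverable.
print(json.dumps(asdict(record), indent=2, ensure_ascii=False))
```

Keeping such records per deliverable makes “process transparency” a routine artifact rather than an ad-hoc explanation.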
This “process transparency” is the first step toward building trust in AI use.
Data Transparency: “What” Was Given to the AI
The quality of AI output is directly linked to the quality of input data. Clients have the right to know how their confidential information is provided to the AI and how it is protected. This concern becomes particularly apparent when using public tools like OpenAI’s ChatGPT in business, where data might be used for training.
The solution is clear. For highly confidential projects, using “AI models that operate in a local environment” where data does not leak externally, or “enterprise-grade AI tools” that guarantee data protection for businesses, is essential. The monthly cost can be several times that of general versions (e.g., ChatGPT Team plan is about $25 per user/month), but this should be considered a necessary expense for purchasing trust.
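Regardless of which tool is chosen, one simple complementary mitigation is to redact client identifiers before any text leaves your environment. A minimal sketch (the `redact` helper and the two patterns are illustrative; a real deployment would cover many more identifier types):

```python
import re

# Illustrative patterns only; extend for names, account numbers, addresses, etc.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),  # Japanese-style numbers
}

def redact(text: str) -> str:
    """Replace confidential identifiers with placeholders before the text
    is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact: tanaka@example.co.jp, tel 03-1234-5678"
print(redact(sample))  # Contact: [EMAIL], tel [PHONE]
```

Redaction does not replace contractual data-protection guarantees, but it reduces what is at stake if a tool's handling of inputs ever changes.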
Value Transparency: “To What Extent” AI Use is Reflected in Pricing
This is the most delicate issue. When business efficiency is significantly improved by AI, where should the resulting cost savings go? If a consulting firm monopolizes the fruits of internal efficiency gains and charges the same fees as before, clients will likely perceive it as “double-dipping.”
The ideal approach is to openly acknowledge the efficiency gains from AI use and build a model that shares the benefits with the client. For example, proposals like “Initial analysis using AI completed the current state diagnosis in half the usual time. Accordingly, fees are reduced by 20%, or resources are allocated to additional deep-dive analysis” could be considered.
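The benefit-sharing idea above reduces to simple arithmetic. A sketch with hypothetical figures (the 50/50 split ratio and the fee amount are assumptions to illustrate the mechanism, not a recommended rate):

```python
def share_ai_savings(base_fee: float, hours_saved_ratio: float,
                     client_share: float = 0.5) -> dict:
    """Split the savings from AI-driven efficiency between firm and client.

    base_fee:          original project fee
    hours_saved_ratio: fraction of billable effort eliminated by AI
    client_share:      fraction of the savings passed to the client (assumed 50/50)
    """
    savings = base_fee * hours_saved_ratio
    client_discount = savings * client_share
    return {
        "original_fee": base_fee,
        "savings_from_ai": savings,
        "fee_reduction": client_discount,
        "new_fee": base_fee - client_discount,
    }

# Diagnosis done in half the usual time, savings shared 50/50.
print(share_ai_savings(base_fee=10_000_000, hours_saved_ratio=0.5))
```

Making the split explicit in the proposal, whatever ratio is chosen, is what converts efficiency gains from a perceived “double-dip” into a visible shared benefit.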
Practical Framework: A “Trustworthiness Checklist” for AI-Powered Consulting
What should executives and CTOs specifically check when selecting and evaluating external consultants? Please refer to the following checklist.
Items to Confirm During Selection
- Documented AI Usage Policy: Does the firm publicly disclose its policy on AI use?
- Disclosure of Tool Stack: Which specific AI tools (Claude, ChatGPT Enterprise, proprietary models, etc.) will be used?
- Data Security Measures: Specific contractual clauses regarding the handling of confidential information.
- Definition of Human Involvement: What kind of verification/editing by experts is performed on AI-generated outputs?
Evaluation Points During Project Execution
- Regular Provision of Process Reports: Are interim progress reports on AI-assisted analysis and summaries of prompts used being shared?
- Presentation of Alternatives: For major proposals generated by AI, are verification results from human experts offering different perspectives also presented?
- Explanation of Cost Structure: How are the efficiency gains from AI use reflected in the project’s outputs and costs?
Learning from Leading Examples: The Direction Indicated by GUGA’s “GenAI HR Awards”
This trend toward transparency is not limited to the consulting industry. GUGA’s “GenAI HR Awards 2026” is an initiative that recognizes leading examples of AI use in human capital strategy. What is evaluated here is not merely the fact that “AI was used,” but both the process and the outcomes: “how AI was used and what human capital value was created.”
What this award suggests is that the maturity evaluation criteria for AI use are shifting from tool implementation to “the quality of the value creation process.” This applies to consulting as well, and we can predict an era where the “methodology” of AI use itself becomes a differentiating factor.
The Next Move as an Executive: Cultivating an Eye for Judging the “Quality” of AI Use
To effectively leverage consulting in the AI era, it is essential for executives themselves to understand the basic mechanisms and limitations of AI. This does not require advanced technical knowledge. Understanding the following three points is sufficient.
- First, AI is a tool that generates answers “probabilistically.” Even the same question can yield different answers due to subtle differences in prompts.
- Second, AI output always requires “fact-checking.” Verification against primary sources is essential, especially for numerical data and legal interpretations.
- Third, excellent AI utilization is born from “human-AI collaboration.” Rather than entrusting everything to AI, the crucial loop is human expertise setting the direction, AI executing tasks, and humans verifying and adjusting the results.
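The third point, the human-AI collaboration loop, can be expressed as a simple control structure. A sketch (the `draft_with_ai` and `human_review` functions are placeholders for a real model call and a real expert review):

```python
from typing import Callable

def collaboration_loop(task: str,
                       draft_with_ai: Callable[[str, str], str],
                       human_review: Callable[[str], tuple],
                       max_rounds: int = 3) -> str:
    """Human sets direction, AI drafts, human verifies and adjusts."""
    feedback = "Initial direction set by the human expert."
    draft = ""
    for _ in range(max_rounds):
        draft = draft_with_ai(task, feedback)     # AI executes the task
        approved, feedback = human_review(draft)  # human fact-checks the output
        if approved:
            return draft
    return draft  # max rounds reached without approval; escalate to a human

# Stub demonstration: the reviewer rejects drafts lacking a cited source.
def fake_ai(task, feedback):
    return f"{task} [source: primary data]" if "source" in feedback else task

def reviewer(draft):
    ok = "source" in draft
    return ok, "" if ok else "Add a primary source citation."

print(collaboration_loop("Market size estimate", fake_ai, reviewer))
```

The point of the structure is that the loop terminates on human approval, not on the AI's first answer.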
Experience using AI in your own company’s operations is a powerful asset when evaluating external consultants. We recommend starting with in-house practice using AI tools costing around $130 per month to gain an intuitive understanding of AI’s potential and limitations. That experience will cultivate a discerning eye for judging the quality of AI use in high-value consulting contracts.
The future of consulting will be determined not by “whether AI is used,” but by “the quality and transparency of its use.” By clearly articulating this new evaluation criterion, client executives and CTOs can build truly valuable consulting relationships in the AI era.