
The AI “Military Use” Debate Reveals New Standards for Corporate Governance

“No Good or Evil, But There is Responsibility”: The Essence of the AI Military Use Debate

Reports have emerged of a conflict between the US AI startup Anthropic and the US Department of Defense. The company is reportedly holding to its stance of restricting military use of its AI model “Claude,” leading to intense discussions with the government. Meanwhile, in Japan, a pointed observation has circulated: “Generative AI itself is neither good nor evil, but there is responsibility for ‘executives who do not define how it is used.’”

At first glance, this might seem like a distant issue. However, these two news stories highlight the same fundamental question posed to every company executive: Have you clearly established a basic policy on “to what extent and in what ways your company will permit the use of AI”?

It would be a mistake to dismiss the extreme example of military use as irrelevant to business. Risks such as your company’s AI unintentionally promoting discriminatory hiring practices, being exploited to reproduce a competitor’s confidential information, or producing false marketing materials lurk for any company that introduces AI with vague policies. The Anthropic conflict symbolizes the arrival of an era in which AI developers must draw lines around the applications of their own technology. And the same governance requirement applies equally to companies on the “user” side of AI.

Learning “Use Case Limitation” from Square Enix’s Practical Example

So, what should you do concretely? A hint lies in another recent news story: Square Enix Holdings (Square Enix HD). The company used AI to streamline the clean-up and finishing of “name” drafts (the rough layouts and dialogue of a manga), a task that consumed approximately 3,000 man-hours annually in its manga editing operations.

The crucial point here is that they established a clear constraint: “not to learn artistic styles.” Imitating a specific artist’s style carries risks of copyright infringement and undermining creator individuality. Therefore, the company strictly limited the AI’s use to editorial support tasks like “tracing lines” and “applying screen tones.”

This is an extremely insightful decision. While AI’s potential lies in “being able to do anything,” the true value creation in business comes from the management decision of “determining what not to let it do.” Square Enix drew a clear line: maximize AI’s capabilities without encroaching on the core of creativity or incurring legal and ethical risks.

Try applying this to your own company. If introducing ChatGPT to the sales department, do you have a policy like “drafting personalized emails to clients is permitted, but generating fictional client case studies or performance data is prohibited”? For the HR department, the line might be “improving job posting copy is permitted, but auto-generating reports that analyze applicants’ social media is prohibited.”

5 Practical Steps for Establishing an “AI Use Policy”

Vague policies leave frontline staff unable to act. Below are actionable steps for establishing an AI use policy.

1. Identify Risk Areas: Categorize your company’s operations (e.g., “information dissemination,” “customer service,” “internal document creation,” “data analysis,” “development”) and list the worst foreseeable risks for each area (data leaks, misrepresentation, promotion of discrimination, copyright infringement, etc.).

2. Create a Tool-Specific Allowlist: Set usage permission levels by tool and department, e.g., “ChatGPT Enterprise permitted company-wide,” “Image generation AI limited to the design department,” “Code generation AI for development department only.” The principle should be default deny: any use not explicitly approved is prohibited. (A code sketch of such an allowlist, together with the input rules of step 3, appears after this list.)

3. Formalize Rules for Input Data: This is the most critical step. Clearly prohibit inputting confidential information, personal data, or customer data into AI tools. In practice, even in our own media operations, we use anonymized versions of contracts for review checks and never input raw data.

4. Establish an Output Verification Process: Mandate that all AI-generated documents, code, and analysis results undergo final human review. Especially for content disseminated externally or data used as a basis for decision-making, a process to verify sources and soundness is necessary.

5. Build a Continuous Review System: AI technology and related regulations change rapidly. Establish a system to review the policy quarterly, incorporating new risks and best practices.
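To make steps 2 and 3 operational rather than aspirational, it helps to encode the allowlist and input rules in a form that can actually be checked. Below is a minimal sketch in Python; the department names, tool names, and detection patterns are illustrative assumptions, not a recommended standard.

```python
import re

# Illustrative tool allowlist per department (step 2).
# All department and tool names here are hypothetical examples.
TOOL_ALLOWLIST = {
    "sales": {"ChatGPT Enterprise"},
    "design": {"ChatGPT Enterprise", "image-generation AI"},
    "development": {"ChatGPT Enterprise", "code-generation AI"},
}

# Illustrative input rules (step 3): naive patterns that flag likely
# personal or confidential data before a prompt leaves the company.
PROHIBITED_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "an email address"),
    (re.compile(r"\b\d{2,4}-\d{2,4}-\d{3,4}\b"), "a phone number"),
    (re.compile(r"(?i)\bconfidential\b|\binternal only\b"), "a confidentiality marker"),
]

def check_request(department: str, tool: str, prompt: str) -> list[str]:
    """Return policy violations for a proposed AI request; empty list means allowed."""
    violations = []
    # Default deny: a department not in the allowlist may use nothing.
    if tool not in TOOL_ALLOWLIST.get(department, set()):
        violations.append(f"'{tool}' is not approved for the {department} department")
    for pattern, label in PROHIBITED_PATTERNS:
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain {label}")
    return violations

# Example: a sales request that accidentally includes a client email address.
print(check_request(
    "sales", "ChatGPT Enterprise",
    "Draft a follow-up email to tanaka@example.co.jp about our pricing proposal.",
))
# -> ['prompt appears to contain an email address']
```

The value of writing the policy down this way is that it becomes testable: when the policy document and the check code are maintained together, the quarterly review in step 5 can verify that frontline enforcement still matches what management approved.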

Cost and Implementation: Concrete Measures Even SMEs Can Start Today

You might be thinking, “Governance sounds good, but we lack resources.” However, utilizing modern AI tools makes implementation possible at surprisingly low cost.

First, you can use AI to help draft the policy itself. For example, instructing Claude 3.5 Sonnet with, “We are a small-to-medium-sized manufacturer. We want to start using generative AI in our sales and administrative departments. Please draft a usage policy to mitigate risks,” can generate specific clause drafts tailored to your industry in minutes. Then, have the management team discuss and refine it.
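As a hedged sketch of what that looks like in practice, the same instruction can be sent through the Anthropic Python SDK. The model identifier below is an illustrative assumption; substitute whatever model is current when you run it.

```python
import anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic()

prompt = (
    "We are a small-to-medium-sized manufacturer. We want to start using "
    "generative AI in our sales and administrative departments. "
    "Please draft a usage policy to mitigate risks."
)

# The model name is an assumption; check Anthropic's current model list.
message = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

# The response is a first draft, not a finished policy.
print(message.content[0].text)
```

Treat the output strictly as raw material: the step that creates governance is the management discussion that follows, not the generation itself.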

Next, technical controls. Rolling out ChatGPT Enterprise company-wide (approx. $30/user/month) allows settings such as disabling specific functions or restricting data transmission to external sites from the admin panel. For finer control, opting for Microsoft Copilot for Microsoft 365 ($30/user/month) enables the “Business Chat” mode, which references only data within your company’s Microsoft 365 environment, significantly reducing data leakage risk.

Regarding cost, a practical approach is a phased rollout: start with “approved members in approved departments” before distributing expensive enterprise licenses to all employees. Based on our consulting experience, successful cases often begin with a limited pilot project led by the back-office manager and CTO to test the policy’s effectiveness.

What Anthropic’s Decision Teaches Us About the “Source of Competitive Advantage”

I believe Anthropic’s prioritization of its own policy over a government contract is not merely an ethical stance but also a strategic business decision: it establishes the brand value of being a “responsible AI developer.”

Consumers and business partners now pay attention not only to technical capability but also to the philosophy behind how that technology is implemented in society. Clearly defining your company’s AI use policy and communicating it externally is no longer just a compliance cost. It can become a powerful differentiating factor that enhances corporate trust and attracts top talent and like-minded customers.

“Our company utilizes AI, but to maximally respect customer privacy and creativity, we strictly manage it under the following policy.” A message like this will shape corporate value itself in the coming era.

The Executive’s Responsibility: Define the “Acceptable Scope,” Not the “Potential” of the Technology

Generative AI itself is neither good nor evil. However, there is clearly good and evil in the impact its outputs have on society. The conflict between Anthropic and the Pentagon, Square Enix’s use-case limitations, and the discourse questioning the “responsibility of executives who do not define usage” all converge on the same conclusion.

AI is a powerful management resource. But like any powerful resource, its handling requires clear guidelines and constant management. What is required of executives is not to talk about AI’s “potential,” but to decisively define its “acceptable scope” in light of their own business and values.

This cannot be delegated entirely to the technical department. Top management themselves have the responsibility to understand the risks, formulate the policy, and embed it within the organization. The debate over AI’s military use shows us that no one can escape this responsibility anymore.

Why not start by putting “Our Company’s Basic AI Use Policy (Draft)” on the agenda for your next executive meeting?
