My boss dropped a complicated fuel loading project on my desk and asked for a project scope document. Not a rough outline—a comprehensive scope that would go to all stakeholders, clarify the project intent, address TSSA specifications, inspection requirements, piping, valves, timelines, costs, and risk areas.
I had maybe two hours of knowledge about half of those topics.
I started the way I always do: rough draft first. Outlined the project, identified costs and timelines, flagged long-lead items, noted areas of risk, wrote an overall description. Two hours later, I had something that captured what I knew.
Then came the hard part. Taking that rough framework and making it accurate, comprehensive, and professional—especially given my limited knowledge of TSSA specifications and technical requirements—would have consumed a full 40-hour week. Maybe more.
I dumped the rough draft into AI.
Two hours later, after several edits and sense checks to make sure the AI output actually reflected reality, I had a polished, professional document. It went out to all stakeholders. It’s been used repeatedly since then to clarify project intent and keep everyone aligned.
AI had just become my most productive team member. A team member who could research TSSA specifications, structure complex information, and produce professional technical documentation in a fraction of the time it would take me to do it manually.
Here’s the problem: that same AI doesn’t sign NDAs. It doesn’t work for my company. It doesn’t have any obligation to protect the information I just fed it.
And if I’m being honest, I probably fed it information that my clients and employer would be very uncomfortable knowing went into a system I don’t control.
The Capabilities Are Incredible
Let me tell you what I’m using AI for right now, because it’s not just one fuel loading project. AI has become integrated into how I manage projects.
Technical Documentation and Compliance: I’m using AI to create professional technical documents for inspectors, develop project scope documents, research requirements, and handle communication with regulatory bodies like TSSA. The time savings are massive, and the quality of output is consistently professional.
Automated Reporting Dashboard: I’ve built an AI-powered report dashboard that generates status updates for all stakeholders. It creates PDFs, pushes them to Google Drive, and I have a script that monitors the drive and automatically emails the most recent PDF to designated individuals. When it works, it’s beautiful—everyone gets timely updates without me manually creating and distributing reports.
Lessons Learned System: I’m developing a system that helps me create and organize a lessons learned database. The next phase is using that database to prompt relevant lessons learned to appropriate tasks as projects progress. The idea is that when someone starts a similar task, the system surfaces the lessons we’ve already learned so we don’t repeat mistakes.
These aren’t theoretical use cases. This is how I’m working right now. AI has made me more efficient, more organized, and frankly better at my job.
And that’s exactly why it’s dangerous.
But AI Doesn’t Sign NDAs
Here’s what was in that fuel loading project scope document I created:
- Specific equipment specifications
- Timeline and delivery commitments
- Cost estimates and budget allocations
- Risk areas and mitigation strategies
- Regulatory compliance requirements
Now here’s the uncomfortable question: what happens to that information once it goes into an AI tool?
Had I used a free AI service, none of that would have been private. The information I provide to help the AI understand my project context—all those specific details that made the output relevant and accurate—doesn’t disappear when I close the browser window.
If you wouldn’t send a document to a consultant without an NDA, why would you send it to an AI without understanding where that information goes and who can access it?
The problem isn’t that AI is malicious. The problem is that AI is a tool, not a team member. It doesn’t have loyalty to your company. It doesn’t understand confidentiality. It doesn’t know which information is sensitive and which is safe to share.
It just processes whatever you feed it.
The Automation Trap
That automated reporting dashboard I mentioned? It’s a perfect example of how AI capabilities create new risks.
The system works like this: AI generates a status report, saves it as a PDF to Google Drive, a script monitors the drive, and when a new PDF appears, it automatically emails it to stakeholders.
When everything works correctly, it’s fantastic: stakeholders get timely updates and I’m out of the weekly create-and-distribute loop.
But here’s what keeps me up at night: what if the wrong file ends up in that monitored folder? What if the email list includes someone who shouldn’t have access? What if the script malfunctions and sends sensitive financial information to the entire stakeholder list instead of just the project manager?
Automation multiplies efficiency. But it also multiplies the impact of mistakes.
And here’s another challenge I’m dealing with: firewalls. The system works beautifully on open networks, but behind firewalls—where most manufacturing facilities operate—it gets complicated fast. Some networks block HTML links entirely because they’re internet-capable, and automated file sending, while technically straightforward, may not be possible behind a firewall without significant IT involvement and security reviews.
These technical constraints exist for good reasons. They’re designed to prevent unauthorized access to sensitive information. But they also highlight the tension between AI innovation and security requirements.
What’s Actually at Risk?
Let me be specific about what’s in my project files that should never end up in an AI system without careful sanitization:
Customer names and program details. My clients don’t want their projects, timelines, or strategic plans exposed to competitors or the public. When I use AI to help structure a project scope, I need to remove identifying information first.
Proprietary part designs and specifications. Technical specifications, dimensional requirements, material selections—this is competitive intelligence. If this information leaks, it could compromise client relationships and competitive positions.
Financial information. Quotes, costs, profit margins, budget allocations. This is sensitive internally and externally. I would never put raw financial data into an AI tool.
Personnel information. Names, roles, contact information, performance assessments. Personal information should never go into AI systems I don’t control.
Lessons learned containing competitive intelligence. This might be the sneakiest risk. Lessons learned databases contain years of accumulated knowledge about what works, what fails, how long things really take, what problems to anticipate. This is exactly the kind of information competitors would love to have. And it’s exactly the kind of information I want to feed into AI to make it more useful.
The more valuable AI becomes for managing your projects, the more sensitive information you’re tempted to feed it. That’s the trap.
My Sanitization Process (Cumbersome But Necessary)
Here’s what I do before putting any project information into AI:
I take the document or data and systematically remove:
- All customer names and facility identifiers
- Specific part numbers and proprietary specifications
- Financial figures (or replace with percentage ranges)
- Personnel names and contact information
- Proprietary process details that could identify the application
I replace specific information with generic placeholders:
- “Customer A” instead of actual company names
- “Component X” instead of part numbers
- “Manufacturing facility” instead of specific locations
- “Regulatory body” instead of naming TSSA or specific inspectors
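A first pass at that substitution can be scripted. The patterns below are purely illustrative—“Acme Fuels Inc.” is a made-up customer and “PN-####” a made-up part-number format—and no script catches everything, which is why a human still verifies the output before it goes anywhere near an AI tool:

```python
import re

# Hypothetical sensitive terms mapped to generic placeholders; in practice
# this mapping comes from a maintained, project-specific list.
REDACTIONS = {
    r"\bAcme Fuels Inc\.?": "Customer A",             # customer names
    r"\bPN-\d{4,}\b": "Component X",                  # part numbers
    r"\bTSSA\b": "the regulatory body",               # regulator names
    r"\$\s?\d[\d,]*(?:\.\d{2})?": "[budget figure]",  # dollar amounts
}

def sanitize(text: str) -> str:
    """Replace sensitive identifiers with generic placeholders."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text
```

The script handles the mechanical part; the judgment part—deciding what counts as sensitive—stays with me.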
Is this cumbersome? Absolutely. It adds time to every AI interaction. It breaks my workflow.
But here’s the reality: the time I spend sanitizing information is still less than the time AI saves me. And more importantly, it’s the price of using a powerful tool responsibly.
If I can turn a 40-hour documentation project into 4 hours total (2 hours rough draft + 2 hours AI editing), then spending 30 minutes sanitizing the input is still a massive net gain.
But I have to actually do it. Every time. No shortcuts because “this document isn’t that sensitive” or “I’m just using it for research.”
Treating AI as a Stakeholder
I’ve spent 35 years learning to engage stakeholders properly. I know that communication without consideration of confidentiality damages relationships. I know that sharing information with the wrong people creates problems that take years to repair.
AI needs to be treated as another stakeholder in your project communication plan. Not because AI cares about confidentiality—it doesn’t—but because the systems and people behind AI have access to what you share.
You wouldn’t CC your competitor on a client email. You wouldn’t send proprietary specifications to a consultant without an NDA. You wouldn’t share financial information with unauthorized personnel.
AI deserves the same confidentiality respect you give to any other stakeholder who might see your information.
This means:
- Understanding where your data goes when you use free AI tools
- Knowing who has access to the information you provide
- Recognizing that “free” usually means “we use your data in ways you might not like”
- Treating AI interactions with the same care you’d use when sharing information with external parties
The manufacturers who figure out this balance—using AI effectively while protecting sensitive information—will have a significant competitive advantage. Those who ignore the risks will learn expensive lessons when client information leaks or competitive intelligence ends up in the wrong hands.
The Company Policy You Need Before Problems Happen
Most companies don’t have AI policies yet. But your team is already using AI tools whether you have policies or not. ChatGPT, Claude, Gemini, specialized industry tools—they’re being used right now for project planning, documentation, research, and communication.
The question isn’t whether to use AI. The question is whether you’ll have policies and practices in place before something goes wrong, or after.
Here’s what needs to be addressed:
What never goes into AI tools:
- Personal information about employees, clients, or contractors
- Customer names and identifying details without sanitization
- Proprietary specifications and technical data
- Financial information including costs, quotes, and margins
- Information covered by NDAs or confidentiality agreements
What requires sanitization before AI use:
- Project scope documents (remove customer identifiers)
- Technical specifications (remove proprietary details)
- Lessons learned databases (remove customer and project identifiers)
- Status reports (remove names and specific identifying information)
- Timeline and schedule information (remove customer context)
What’s acceptable for AI assistance:
- Generic research questions
- Template creation and formatting
- Structural organization of sanitized information
- Grammar and clarity improvements on sanitized documents
- Concept development using hypothetical scenarios
Access control and automation protocols:
- Who approves automated file sharing systems?
- What reviews happen before scripts email files automatically?
- How do we verify that file access permissions are correct?
- What happens when systems operate behind firewalls?
- Who monitors automated systems for security issues?
This isn’t about being paranoid. It’s about being professional. It’s about understanding that powerful tools require thoughtful protocols.
The AI Checklist I’m Adding to My Lessons Learned Database
I’m developing a checklist that I run through before using AI for any project task. Here are the key questions:
- What am I attempting to accomplish? Be specific. “Create a professional project scope document” is different from “research TSSA requirements.” The level of sensitivity varies with the task.
- Are files being moved, accessed, or automatically distributed? File movement and automation introduce additional risk vectors. Scripts that work perfectly can distribute files to wrong recipients. Access controls can fail.
- What is the sensitivity of the information? Rate the information: Public? Internal? Confidential? Proprietary? The sensitivity level determines how much sanitization is required.
- Is there a plan for sanitizing files that feed into AI? Don’t assume you’ll “just be careful.” Have a specific protocol. What gets removed? What gets replaced with placeholders? Who verifies sanitization is complete?
- Does the task require personal information—mine or anyone else’s? If yes, stop. Find a different approach. Personal information should never go into AI tools you don’t control.
- What are stakeholders’ concerns about their information? Ask. Don’t assume. Some clients are comfortable with sanitized data going into AI. Others want zero AI involvement. Know before you act.
- Do stakeholders want their names cited in outputs? AI might generate documents that reference people by name. Some people don’t want to be associated with AI-generated content. Clarify this upfront.
- What is the project information sensitivity according to company policy? If your company has classified the information or project as sensitive, respect that classification. Don’t rationalize that “it’s probably fine.”
- Does company policy allow this use of AI? Check. If your company hasn’t addressed AI use yet, raise it. Don’t assume silence means permission.
This checklist isn’t comprehensive, but it’s a starting point. The goal is to make the security consideration automatic, not an afterthought.
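One way to make it automatic is to encode the gating questions in the tooling itself, so a task can’t proceed until the answers are on record. A minimal sketch—the field names are my own invention, not from any real policy system:

```python
# Each gating question paired with the only answer that permits AI use.
# Field names are hypothetical; adapt them to your company's policy.
GATING_QUESTIONS = {
    "contains_personal_info": False,     # must be False to proceed
    "sanitization_plan_in_place": True,  # must be True
    "policy_allows_this_use": True,      # must be True
}

def ai_preflight(answers: dict) -> bool:
    """Block AI use unless every gating answer matches its safe value.

    Unanswered questions fail closed: no answer means no AI use.
    """
    return all(answers.get(q) == safe for q, safe in GATING_QUESTIONS.items())
```

Failing closed is the point: silence or a skipped question blocks the task, rather than defaulting to “it’s probably fine.”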
The Technical Reality: Tools Aren’t Quite There Yet
I want to be honest about something: the vision of AI-powered project management is ahead of the current technical reality in some areas.
My lessons learned system is a great example. I can create and organize a database of lessons learned. I can feed that database into AI to help structure and categorize information. But the linking software—the tools that would automatically prompt relevant lessons learned to team members working on similar tasks—isn’t quite reliable yet.
The concept is sound: when someone starts a task similar to one where we’ve captured lessons, the system surfaces those lessons proactively. But the execution is still challenging. The linking mechanisms aren’t robust enough. The context recognition isn’t consistent enough.
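At its simplest, the matching concept is keyword overlap between a new task description and stored lesson summaries. This toy sketch—hypothetical data, deliberately naive matching—shows the idea, and also shows exactly why it isn’t reliable yet: “valve” and “valves” don’t match, and context is ignored entirely:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lower-case word set, ignoring short filler words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def relevant_lessons(task: str, lessons: list[dict],
                     min_shared: int = 2) -> list[str]:
    """Surface lessons sharing at least min_shared keywords with the task."""
    task_words = tokenize(task)
    return [lesson["summary"] for lesson in lessons
            if len(task_words & tokenize(lesson["summary"])) >= min_shared]
```

A real implementation needs stemming, synonyms, and genuine context recognition—which is precisely the part that still requires manual intervention.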
I’m working on it, and I believe we’ll get there. But right now, it requires more manual intervention than I’d like.
Similarly, the firewall challenges with automated reporting aren’t fully solved. The system works beautifully in open network environments. Behind firewalls—where security is tighter and external connections are restricted—it becomes complicated quickly.
These aren’t reasons to abandon AI-powered project management. They’re reasons to be realistic about capabilities and limitations. They’re reasons to maintain human oversight and not assume automation means “set it and forget it.”
The tools are incredible. They’re also imperfect. That combination requires respect and careful management.
How to Get the Benefits Without the Risk
So how do you actually use AI effectively while protecting sensitive information? Here’s what I’ve learned:
Start with sanitization as default, not exception. Every time you prepare information for AI, assume it needs sanitization. Remove identifying information first, then ask whether the sanitized version still provides enough context for AI to be useful. Usually it does.
Use AI for structure and research, not for processing raw sensitive data. AI is excellent at helping you organize thoughts, research requirements, structure documents, and improve clarity. You don’t need to feed it actual customer names and proprietary specs to get those benefits.
Maintain human oversight on all AI outputs. AI can generate impressive-sounding content that’s factually wrong or inappropriate for your context. You’re the human in the loop. Your judgment, your industry knowledge, your understanding of stakeholder concerns—these can’t be delegated to AI.
Build review steps into automated systems. Before any file gets automatically distributed, have a human verification step. Yes, this reduces the automation benefit. But it also prevents the disaster of sending the wrong file to the wrong people.
Treat free AI tools as public platforms. If you wouldn’t post it on LinkedIn or say it in a public conference, don’t put it into a free AI tool. Assume anything you share could potentially be accessed by others.
Invest in understanding your tools. How does the AI you’re using handle data? Where is information stored? Who has access? What are the privacy policies? These aren’t theoretical questions—they determine what you can safely use AI for.
Document your AI use in project records. When you use AI to help create project documentation, note it. This creates accountability and helps teams understand what information has been processed through external tools.
Balance innovation and security, don’t choose one over the other. The goal isn’t to avoid AI because it’s risky. The goal is to use AI effectively while managing risk appropriately. Both innovation AND security, not innovation OR security.
The Competitive Advantage Is in the Balance
Here’s what I believe: the manufacturers who figure out how to use AI safely and effectively will have a significant competitive advantage over the next decade.
AI can dramatically improve project documentation, stakeholder communication, lessons learned capture and application, and resource management. These aren’t small improvements—they’re order-of-magnitude changes in efficiency and quality.
But the advantage goes to companies that implement AI thoughtfully, with appropriate safeguards and clear policies. Not to companies that rush into AI without considering the risks.
The competitive advantage isn’t just in using AI. It’s in using AI while maintaining client trust, protecting proprietary information, and ensuring security compliance.
That fuel loading project scope document that AI helped me create in two hours instead of forty? That’s real value. But that value disappears instantly if my client discovers I fed their project details into an unsecured AI tool without their knowledge or consent.
The balance matters. The protocols matter. The respect for confidentiality matters.
AI is transforming project management. It’s making me better at my job. It’s helping me deliver higher quality work in less time, which benefits my clients and my employer.
But AI is also a stakeholder that deserves careful management. It’s a tool that requires thoughtful protocols. It’s a capability that demands respect for the information we feed it.
The manufacturers who embrace AI while respecting these realities will thrive. Those who ignore the risks will learn expensive lessons about trust, confidentiality, and the real cost of “free” tools.
I’m adding AI protocols to my lessons learned database right alongside the technical lessons about injection molding and project launches. Because managing AI safely is just as important as managing stakeholders, timelines, and budgets.
It’s another aspect of front-loading the design process—establishing clear policies and practices before problems emerge, not after.
AI is the best project manager I’ve never met. Brilliant, tireless, incredibly productive. And exactly because it’s so good, it requires the same careful management, clear boundaries, and thoughtful protocols I’d apply to any other powerful resource.
The question isn’t whether to use AI in project management. The question is whether you’ll use it responsibly.
Are you using AI tools in your manufacturing operations without clear policies about information security? Launchpad Project Management helps companies develop practical protocols for AI use that balance innovation with confidentiality protection. Let’s talk about using AI effectively while protecting the information your clients trust you with. [Contact us to discuss AI protocols for your operations.](contact page link)
