In Malaysia's dynamic business environment, small and medium-sized enterprises (SMEs) are known for their agility and ability to adapt quickly. The emergence of large language models (LLMs) has introduced a new element to business operations: these tools can generate reports and draft content in seconds, making many tasks more efficient. However, the sheer volume of information LLMs produce, much of it difficult to verify, presents a fresh set of challenges for businesses.
At Yunzi Digital, our data engineering team's daily work has shown us that LLMs generate text by predicting what is statistically likely, not by retrieving verified facts. Sometimes they confidently present information that does not exist or fabricate sources, which is a serious risk in a business setting. A single piece of incorrect information can harm a company's reputation and lead to costly mistakes.
To help you better navigate this landscape and leverage AI effectively without its drawbacks, we've summarized some practical strategies. These methods are based on a data-driven, professional perspective, and we hope they provide a solid reference for you.
Many people treat LLMs like a "know-it-all" encyclopedia. But as we've learned, their answers are based on predicting patterns, not on factual accuracy.
To minimize errors, one of the most effective strategies is to ground the model in a trusted source, the idea behind Retrieval-Augmented Generation (RAG). This sounds technical, but it is simple to apply: before you ask the LLM for information, you provide it with a credible, verified source. For example, if you want to understand the latest tax policies, don't just ask the AI. Instead, download the official tax guide from the Inland Revenue Board of Malaysia (LHDN) website, attach it to your conversation with the model, and then give it a clear instruction: "Based on this document, please summarize the key tax considerations for a newly established company."
By doing this, you shift the LLM's role from "information creator" to "information organizer." It's no longer forced to invent information; it works from the authoritative data you give it. This significantly improves the accuracy of the output and ensures all content is grounded in reliable information.
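For teams with a developer on hand, the same habit can be scripted. The sketch below is only an illustration of the idea, not a production setup: it assumes the LHDN guide has already been saved as a plain-text file named lhdn_tax_guide.txt, and `call_llm` is a placeholder for whichever model API or chat tool your team actually uses.

```python
# Minimal sketch of "give the model a verified source before asking".
# Assumptions: the LHDN guide is saved locally as plain text, and call_llm()
# is a placeholder to be replaced with your own LLM provider's API call.

def build_grounded_prompt(document_text: str, question: str) -> str:
    """Combine a verified document and a clear instruction into one prompt."""
    return (
        "Answer using ONLY the document below. If the answer is not in the "
        "document, say so instead of guessing.\n\n"
        f"--- DOCUMENT START ---\n{document_text}\n--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: connect this to the LLM service your company uses.
    raise NotImplementedError("Replace with a call to your LLM provider.")

if __name__ == "__main__":
    with open("lhdn_tax_guide.txt", encoding="utf-8") as f:
        guide = f.read()

    prompt = build_grounded_prompt(
        guide,
        "Summarize the key tax considerations for a newly established company.",
    )
    print(call_llm(prompt))
```

The important design choice is the first line of the prompt: telling the model to answer only from the supplied document, and to admit when the document does not cover the question, is what shifts it from inventing information to organizing it.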
In business, we know that what you hear isn't always true. The same principle applies to AI. Treat every LLM output as a preliminary piece of information that needs to be validated, not a final conclusion.
Manual Cross-Check: For any critical information—legal terms, business data, or supplier details—always verify it with at least two independent, trustworthy sources. For Malaysian businesses, this means double-checking with official government websites, like the Companies Commission of Malaysia (SSM), or reputable media outlets like The Star or Malay Mail.
Human-in-the-Loop Review: Never use AI-generated content—whether for internal memos or public-facing social media posts—without a final human review. This isn't a lack of trust in the technology; it's a sign of a robust and responsible workflow. Only a human can truly grasp local nuances, cultural context, and the subtle tone that is so important in the Malaysian market.
LLMs can be convincing because their outputs are often well-structured and confident. They are expert "performers," but you need to learn to see past the performance to the underlying logic.
Question the Source: Make it a habit to ask the LLM: "Where did you get this information from?" If it can't provide a specific, verifiable source or gives a vague answer like "according to a study," you should be cautious.
Request an Alternative: Try asking it to "explain that in a different way." If the LLM's explanation becomes inconsistent or it can't offer a different perspective, it might not truly understand the topic.
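If your team already scripts its LLM calls, these two probes can be turned into automatic follow-up questions. The sketch below is illustrative only: `call_llm` is again a placeholder for your provider's API, and the keyword check is a rough heuristic for flagging vague sourcing, not a substitute for a human reviewer.

```python
# Sketch: automatically follow up an answer with the two probes above.
# call_llm is a placeholder for your LLM provider's API; the phrase list is a
# rough heuristic for vague sourcing, not a reliable fact-checker.
from typing import Callable

VAGUE_PHRASES = ("according to a study", "it is widely known", "experts say")

def probe_answer(question: str, answer: str, call_llm: Callable[[str], str]) -> dict:
    """Ask the model where its answer came from and for an alternative explanation."""
    source_reply = call_llm(
        f"Question: {question}\nYour answer: {answer}\n"
        "Where did you get this information from? Name specific, verifiable sources."
    )
    rephrased = call_llm(
        f"Explain the following answer in a different way, for a non-specialist:\n{answer}"
    )
    vague_sourcing = any(p in source_reply.lower() for p in VAGUE_PHRASES)
    return {
        "source_reply": source_reply,
        "rephrased": rephrased,
        "needs_human_review": vague_sourcing,  # vague sourcing -> escalate to a person
    }
```

A simple pattern is to run this kind of check on any AI-generated material headed for clients or the public, and route anything flagged straight into the human review step described earlier.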
Ultimately, distinguishing fact from fiction is not just a technical issue—it's about human judgment and data literacy. We believe that empowering SMEs means more than just providing them with tools; it means helping their teams develop a new way of thinking.
Invest time in training your team to understand the strengths and weaknesses of LLMs. Teach them how to value and use first-party data (your company's unique, proprietary information). Make cross-verification a standard company habit. This isn't just about protecting your business; it's about building a more trustworthy and credible brand.
We believe that the future belongs to businesses that can intelligently guide technology, not be guided by it. LLMs are powerful tools that can boost your company's efficiency, but the final responsibility for the truth remains in your hands. We hope these insights serve as a useful reference for your journey in the digital age.