Previously, whenever my friend and I discussed what we could do if we started a business someday, every new idea would earn me his familiar questions: “What if the big companies eventually get into it?” or “Even if we can do it, won’t the big companies do it more easily?”
With the rise of large language models, by 2023 many developers and companies were shipping thin wrappers around model APIs, claiming that “just putting a UI around GPT is enough to launch a product.” By 2025, the buzzword had shifted from chatbot to AI workflow to AI agent: a new species capable of independent reasoning, tool use, and automated task completion.
Now, when I open Product Hunt, I see a host of “AI-for-X” startups trending every day, from generating PowerPoints and writing weekly reports to booking flights and trading stocks. In 90% of these cases, the underlying logic is simply “API wrapping.”
Fueled by LLM APIs, an average coder or indie developer can now build an LLM-based app with ever-lower costs and barriers to entry. This has sparked a burst of innovation, even as it ushered in a flood of homogeneous creations.
Every hot new release seems to be led by a small startup or solo developer. Yet once a niche market is proven, the big companies move in: Google, OpenAI, Meta, ByteDance, Tencent, Alibaba, and the rest deploy their own strategies and leverage massive user bases, often competing fiercely with one another. With their infrastructure advantages, even a slight adjustment in API pricing can erase a meaningful share of a startup’s revenue.
Two recent headlines drive the point home: Google I/O 2025 and OpenAI’s $6.5 billion all-stock acquisition of an AI hardware company.
Google debuted AI Mode Search, venturing straight into AI + search. Their Project Mariner/Agent Mode can handle up to 10 web tasks concurrently and supports “teach & repeat.” At the device level, they showcased a head-mounted display in collaboration with Samsung (Project Moohan) and Gemini glasses in partnership with Gentle Monster and Warby Parker—demonstrating real-time translation and a private HUD.
Google is no longer just selling APIs; it’s integrating models → agents → search/hardware as a seamless chain, directly “occupying the user’s next screen.”
OpenAI, for its part, acquired an AI hardware maker founded by a former Apple designer and his hardware team, in an all-stock deal valued at $6.5 billion, its largest acquisition to date. Some 55 key people across hardware, supply chain, and industrial design have joined, and the first AI device is expected to debut in 2026. Reportedly, the goal is to break free of the iOS/Android sandbox and build hardware for “native AI interaction” (widely speculated to be a wearable or portable device).
This indicates that model companies are no longer satisfied with “cloud APIs” alone; hardware is set to become the next trump card for locking in user scenarios, pricing power, and data feedback loops.
Within just two days, Google extended Gemini to eyewear and OpenAI snapped up an Apple-level hardware team—effectively sealing off the general entry point.
So if I were to discuss AI startup ideas with my friend again, he’d probably ask: when Google blocks the universal entry point with a full-stack Gemini and OpenAI locks down AI hardware with a $6.5 billion bet, where do small teams find their opening?
Spotting the Gaps the Giants Neglect
Large companies like Google and OpenAI are now securing the general AI gateway through full-stack strategies (self-developed top-tier models, proprietary hardware, ecosystem acquisitions, etc.), making it hard for startups to win on the “large model” front.
Big companies typically focus on major markets, leaving certain regions and emerging markets as blue oceans. Chinese teams, honed by fierce domestic competition, can quickly capture markets in Japan, Southeast Asia, and the Middle East through a mix of cost-effectiveness and local-scenario adaptation, often with no local competitors at all. One Chinese AI video company, for example, customized its models for the Middle East (complying with local religious norms such as facial covering) and, by leveraging low-cost domestic manpower for efficient delivery, built a distinctive overseas strategy of “open-source technology + localized operations + supply chain.”
Such regional cultural and compliance customizations are gaps that the giants struggle to fill quickly.
Most agents today are general-purpose, yet large general models rarely delve into every vertical industry's detailed processes. AI applications tailored to niche sectors naturally offer footholds for startups.
Startup companies can focus on specific industry scenarios, providing end-to-end solutions. For instance, one startup, Cycle Intelligence, zeroed in on the sales process at car dealerships. By integrating AI hardware and workflow systems, they captured roughly 80% of the market. Their wearable AI device analyzes sales conversations in real time, automatically generates daily reports, and assists with morning meetings—all dramatically boosting sales efficiency at dealerships.
Although such niche scenarios require a deep understanding of dealership SOPs and sales pain points, as well as significant manual data labeling for model fine-tuning, this “dirty work” is exactly what big companies are reluctant to tackle. Deep vertical expertise and meticulous local operations build a moat that giants find hard to replicate.
As Zhu Xiaohu once said, “Every AI application is just a wrapper—technical barriers are overrated… What’s truly scarce is the ability to embed AI into real-world scenarios.”
In highly regulated industries such as healthcare, finance, and government sectors where data security and compliance are paramount, giants often progress slowly due to risk and investment concerns. A startup with the necessary certifications and deep industry insights can be the first to offer an AI solution in these fields. For example, in medical imaging or compliance auditing, by securing professional certifications and using small-sample fine-tuning, startups can meet regulatory requirements, prompting giants to opt for partnerships or investments. The areas “that giants are unwilling or unable to tackle” present ample room for startups.
Additionally, some firms leverage their own entrenched advantages and resources. Usually, these are not early-stage startups, but rather companies that have spent considerable time in a particular sector and thus possess unique industry data resources—advantages that giants can’t quickly replicate. Startups, on the other hand, can aggregate scattered industry data or use proprietary customer data to build exclusive datasets for model training, thereby forming a data moat—a challenging but potentially rewarding endeavor.
SignalFire has noted that startups should focus on acquiring high-value data that’s difficult to replicate, such as by integrating industry data from multiple sources or generating new data through their own business processes. For example, U.S. startup EvenUp built a proprietary settlement dataset around personal injury claims to train legal prediction models, offering lawyers unprecedented insights. This data accumulation boosts model accuracy in ways that general-purpose giant models can’t match.
Although large models are powerful, they have limitations. Different industries require different kinds of data. Startups that secure unique data in specific niches can craft AI platforms with distinct characteristics, achieving strong differentiation.
Consider also interaction modes. There are now AI + canvas products such as Flowith, which recently introduced an “infinity agent” (every company wants to be first, and piling on qualifiers certainly makes a product sound unique 🤣). The agent has received decent reviews, and I particularly appreciate its multi-threaded chat approach, which maps well onto how people actually think.
Giant companies tend to follow mainstream interaction patterns (such as web-based chatbots, mobile assistants), but emerging interaction methods or niche terminal devices could offer a breakthrough. There are prototypes such as voice-enabled chat robots for the elderly, AR glasses combined with AI assistants for industrial settings, or intelligent NPCs in gaming—these interactive scenarios haven’t yet been completely dominated by the giants.
Startups can design differentiated human-machine interactions or integrate unique hardware to provide a distinct AI experience, thereby avoiding direct competition at the same user entry points as the giants. Meanwhile, selecting under-saturated distribution channels (such as software ecosystems in specific industries or B2B channels) can serve as effective entry points that bypass the giant-dominated mainstream traffic.
The open-sourcing and widespread adoption of foundation models are rapidly eroding technical barriers. As a leaked Google memo admitted, “We have no moat, and neither does OpenAI.” The open-source community’s pace of iteration has already cracked problems the giants considered “major.”
Thus, startups can build on the shoulders of giants, leveraging open-source models to develop vertical applications rather than training models from scratch. While the large-model “arms race” belongs to the giants, agile, ground-level scenario innovation is now an excellent opening for startups.
Using Classic Models to Evaluate Track Feasibility
Identifying a gap doesn’t automatically mean you should pursue it—what follows is a rational evaluation of the feasibility and long-term value of that track. You can borrow classic analytical frameworks proposed by renowned venture capitalists to examine multiple dimensions.
“Come for the Tool, Stay for the Network” Strategy
Proposed by investor Chris Dixon, this growth strategy suggests offering a standalone tool first to attract users, then gradually layering on a user network or community to generate network effects. Early on, a single-player tool pulls in seed users; later, features such as collaboration, content sharing, or transactions create long-term value and durable competitive advantage.
A classic example is Instagram, which initially attracted users with its filter tool and retained them through its built-in social network, eventually evolving into a massive community.
For AI entrepreneurs, the advice is to first focus on a practical AI tool (such as code-generation assistants or design material generators) to gain users. Once a sufficient user base is established, introduce features such as user interaction, model feedback for improvement, and a content-sharing marketplace to build platform-level network effects.
As Dixon once said, a single-user tool may light the match, but a network is the bonfire that drives long-term retention and moat creation.
Timing vs. Market Fit
“The key to entrepreneurial success is aligning your entry timing with market maturity.” — Elad Gil
Market trends can be fleeting. Entering too early may lead to failure due to immature users and technology; entering too late might mean missing the window of opportunity. Bill Gross, founder of Idealab, analyzed the successes and failures of hundreds of startups and found that timing was the most critical factor, accounting for 42% of the success variance.
For instance, the online video startup Z.com in 1999 had an excellent team and execution, but was doomed by low broadband penetration and collapsed in 2003. Just two years later, with Flash technology and widespread broadband, YouTube emerged and succeeded.
Even if an AI application addresses genuine needs, if the market environment (user habits, supporting technologies, infrastructure, regulations, etc.) isn’t mature, the project might struggle.
Entrepreneurs must assess whether the market timing is right: for example, an AI doctor’s assistant might be easier to promote after policies and ethical guidelines are clarified; enterprise AI software may only see mass deployment once customer concerns about data privacy diminish.
“No matter how brilliant an idea is, if the timing isn’t right, it can still fail.”
Thus, when evaluating a track, analyze whether the current market is in a state of rapid growth—or at least on the cusp—and allow enough time for user adoption to take off.
Market-Founder Fit
Both venture capital veteran Marc Andreessen and angel investor Naval Ravikant emphasize that the alignment between the founders’ backgrounds and the chosen market directly influences entrepreneurial success.
Naval noted, “Entrepreneurs must not only find product-market fit but also achieve what some call ‘product-market-founder fit’.” The chosen track should closely align with the team’s unique strengths, resources, and passion. If the founding team has deep expertise in a specific industry or unique insights into a particular customer segment, then their chances of success in that field are much higher.
Andreessen has also hinted that truly great startups often stem from founders who have a profound connection and obsession with a particular issue—a resonance that drives them to overcome challenges.
When choosing a track, ask yourself: Is this field one I understand and am passionate about? Do I have the industry connections, domain expertise, or unique insights? The stronger the “founder-problem fit,” the greater the chance of developing irreplaceable soft advantages during product refinement and market expansion.
Beyond these three models, traditional factors like market size and growth (i.e., the track’s ceiling and pace) should also be considered, alongside your own resource base, to determine whether the track is worth a long-term commitment.
When both classic models and qualitative judgments signal positive prospects, you can proceed with greater confidence.
Building an MVP Around the Gap and Creating a Moat
After choosing a track, quick action is required to validate the idea with an MVP (Minimum Viable Product) and gradually build a moat.
Rapid Prototype Validation
Leverage existing large-model platforms or open-source models to quickly build a prototype product and test market fit on a small scale with target users.
Today’s abundant foundational models and development frameworks allow startups to call APIs from platforms like OpenAI or Baidu’s Wenxin, or use open-source models (such as the LLaMA series) and tools (like LangChain or Haystack) to cobble together a prototype.
This “don’t reinvent the wheel” approach significantly lowers the barrier to developing AI prototypes, allowing entrepreneurs to focus on differentiating features and user experience.
The open-source community is already solving many of the “major problems” we once faced.
Entrepreneurs should fully leverage these communal resources to rapidly launch demos that address specific pain points, engaging in iterative conversations with seed users to validate the direction.
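As a concrete illustration of the “don’t reinvent the wheel” approach, a prototype can keep its model backend pluggable, so the same app runs against a hosted API or a local open-source model. Everything here is a sketch: the prompt template, field names, and the `fake_backend` stub are invented for illustration, and a real prototype would swap the stub for an actual API or local-model call.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative prompt for one narrow pain point (all names are assumptions).
PROMPT_TEMPLATE = (
    "You are an assistant for {industry}.\n"
    "Task: {task}\n"
    "Input:\n{user_input}\n"
)

@dataclass
class PrototypeApp:
    industry: str
    task: str
    complete: Callable[[str], str]  # any backend: hosted API or local model

    def run(self, user_input: str) -> str:
        prompt = PROMPT_TEMPLATE.format(
            industry=self.industry, task=self.task, user_input=user_input
        )
        return self.complete(prompt)

# Stub backend so the demo runs offline; replace with a real call later,
# e.g. an OpenAI chat-completions request or a locally served LLaMA model.
def fake_backend(prompt: str) -> str:
    return "DRAFT: " + prompt.splitlines()[-1]

app = PrototypeApp(
    industry="car dealerships",
    task="summarize a sales conversation into a short daily report",
    complete=fake_backend,
)
print(app.run("Customer asked about financing options."))
```

Because the backend is a plain function, swapping providers (or A/B testing an open-source model against a hosted one) means changing one argument, not rewriting the app.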
Focusing on a Single Pain Point
A strong MVP should directly address a core pain point. In the chosen gap, identify the most pressing or frequent issue that users face and optimize it using AI techniques.
For example, Cycle Intelligence’s team discovered that 4S dealerships (China’s full-service car dealerships) wasted hours every day filling out sales reports and ran inefficient morning meetings. Their MVP zeroed in on one killer feature, automatic conversation recording and daily-report generation: small in scope yet highly valuable.
By keeping the MVP focused on one or two critical features rather than trying to solve everything at once, startups can prove the value of AI empowerment with limited resources, thereby setting the stage for future expansion.
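To make the single-pain-point idea tangible, here is a minimal sketch of a daily-report generator, assuming a dealership scenario like the one described. The keyword list is invented for illustration; a real MVP would use a transcription pipeline and an LLM for summarization, but even a keyword pass is enough to demo the report format to seed users.

```python
from collections import Counter
from datetime import date

# Illustrative intent keywords for a dealership scenario (assumptions, not a
# real taxonomy); a production system would classify intents with a model.
INTENT_KEYWORDS = {
    "price": "price inquiry",
    "test drive": "test-drive request",
    "financing": "financing question",
}

def daily_report(transcripts: list[str]) -> str:
    """Tally simple intents across the day's conversations into a report."""
    tally = Counter()
    for t in transcripts:
        lower = t.lower()
        for keyword, label in INTENT_KEYWORDS.items():
            if keyword in lower:
                tally[label] += 1
    lines = [
        f"Daily report ({date.today().isoformat()})",
        f"Conversations logged: {len(transcripts)}",
    ]
    for label, count in sorted(tally.items()):
        lines.append(f"- {label}: {count}")
    return "\n".join(lines)

print(daily_report([
    "Asked about the price of the SUV model.",
    "Wants to schedule a test drive this weekend.",
    "Asked about price and financing terms.",
]))
```

The point of the sketch is scope discipline: one input (transcripts), one output (a report), nothing else, so the MVP can be put in front of users within days.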
Data Accumulation and Iteration
Once the MVP validates user demand, swiftly move into a cycle of data accumulation and model refinement. Collect interaction data, feedback, and edge cases from real user sessions in the vertical scenario.
This primary data is a unique asset for the startup and can be used to supervise, fine-tune, and optimize the model.
In the case of the 4S dealership sales assistant, massive amounts of real sales conversation data were collected, and the team invested heavily in manual annotation of key dialogue segments for model training.
This “data grunt work” progressively builds a competitive moat—a model finely tuned to understand and optimize for the sales context in 4S dealerships, far surpassing what a general-purpose model could achieve.
Entrepreneurs should develop a clear data strategy: which key data can be captured through product usage, whether manual labeling or synthetic data generation is needed to boost model capacity, and how to let customers contribute data while ensuring privacy and compliance.
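A data strategy like this can start very simply. The sketch below, with illustrative regex patterns, logs each model interaction as a prompt/completion pair after scrubbing obvious PII, so the records can later feed fine-tuning without leaking personal data. Production systems need far broader coverage than these two patterns.

```python
import json
import re

# Minimal PII scrubbing before storing interactions for later fine-tuning.
# These two patterns are illustrative only, not a compliance solution.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def scrub(text: str) -> str:
    """Replace emails and phone-like digit runs with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def log_interaction(prompt: str, completion: str, store: list) -> None:
    """Append one scrubbed training example to an in-memory store."""
    store.append({"prompt": scrub(prompt), "completion": scrub(completion)})

dataset: list[dict] = []
log_interaction(
    "Call me at 138-0000-0000 or mail me at buyer@example.com",
    "Sure, I will follow up tomorrow.",
    dataset,
)
print(json.dumps(dataset[0], ensure_ascii=False))
```

In practice the store would be an append-only file or database, and the scrubbing step is exactly where a privacy/compliance review should focus before any data reaches a training pipeline.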
As your exclusive dataset grows in scale and quality, the model’s performance will improve, creating a virtuous cycle for your product.
Model Fine-Tuning and Performance Optimization
With a solid data foundation in place, continuously work on fine-tuning the model. This may include applying transfer learning or incremental training techniques to incorporate industry-specific data into a pre-trained model, as well as constraining outputs through business rules for optimal performance.
A finely tuned model will significantly outperform general models in accuracy and expertise, providing a qualitative edge to your product.
For instance, U.S.-based legal AI company EvenUp relies on exclusive legal case data to fine-tune its model, enabling extremely precise compensation predictions in personal injury cases—offering insights that lawyers cannot match through intuition alone.
The fine-tuning process requires repeated experimentation, balancing model size, performance, and cost until the optimal configuration is found. In some cases, training a smaller, specialized model for on-premises or edge deployment can improve responsiveness and reduce dependency on external resources. This model-level refinement, combined with ongoing data accumulation, will become one of your product’s technical moats.
Of course, this step isn’t always essential; sometimes, the exclusive data accumulated is sufficient to widen your moat without the need for further model training.
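The “constraining outputs through business rules” step mentioned above can be as simple as clamping a model’s prediction to domain-sanctioned bounds. The sketch below uses invented bounds (10% to 150% of the documented claim value) purely for illustration.

```python
# Hedged sketch: constrain a model's raw numeric prediction with domain rules.
# The bounds and field names are assumptions, not any real product's logic.

def constrain_settlement(raw_prediction: float, claim_value: float) -> float:
    """Clamp a model's settlement estimate to business-rule bounds:
    never below 10% or above 150% of the documented claim value."""
    low, high = 0.10 * claim_value, 1.50 * claim_value
    return min(max(raw_prediction, low), high)

# A model that hallucinates an implausible number still yields a sane output,
# clamped to the 150% upper bound.
print(constrain_settlement(raw_prediction=9_999_999.0, claim_value=50_000.0))
```

A thin rule layer like this is cheap insurance: it lets a team ship a fine-tuned (or even off-the-shelf) model into a high-stakes workflow while guaranteeing outputs stay within defensible limits.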
Embedding into Processes and Building Stickiness
To secure long-term customer retention, AI products need to become deeply embedded in users’ business processes or daily workflows.
Once your product becomes integrated into a customer’s system—enabling process automation or creating a decision-making loop—the switching costs for that customer become very high, thereby cementing product stickiness.
For instance, early on, Meituan transformed merchants’ IT backends by integrating its services into their daily operations, ultimately creating an ecosystem that was hard for competitors to unseat. Similarly, an AI startup that provides an end-to-end solution (covering data input, AI processing, and result application) can generate comprehensive value for the customer and secure a strong market position.
Entrepreneurs should consider how the product can integrate with a customer’s existing software, databases, and workflows, whether through APIs or plugins, and how to train users to adopt AI tools into their daily routines.
Once your AI system becomes the starting point or an essential node in a customer’s workflow, it will not only continuously collect valuable data for improvement but also secure its position in contract renewals and extended sales.
Hybrid Human-AI Collaboration
In areas where technology is still maturing, consider adopting an “AI plus human” hybrid service model to safeguard user experience. In scenarios where AI-generated outputs are not 100% accurate or capable of handling complex tasks, incorporating human review and assistance can effectively bridge the gap.
For example, in the current stage where AI-generated videos are still imperfect, some startups opt to have freelance studios in smaller cities do final edits on AI-produced video material—employing a “95% AI + 5% human” model to meet customized client needs.
Although this approach may reduce the “pure AI” appeal, it ensures pragmatic service quality that wins customer trust and buys valuable time for further technical refinement.
Sometimes, “non-technical moats” can become a viable survival strategy for smaller players: by introducing human intervention and operational support to deliver personalized services that giants are unwilling to provide, startups can build a robust reputation and strong customer relationships. As the model matures, incrementally increasing the degree of automation can eventually facilitate a transition from “humans guiding AI” to “AI guiding humans.”
Don’t be afraid to use human support to package your MVP—as long as it delivers value and generates revenue, it contributes to your moat.
In short, early on, leverage external resources to swiftly validate demand; then in the mid-term, build internal capabilities, gather data, and optimize your model; finally, embed deeply into customer processes to create lasting stickiness.
Independent Company vs. Giant Ecosystem Add-On
A critical decision when choosing your entrepreneurial direction and business strategy is: Should this project evolve into an independent company, or is it better suited as a feature or component within a giant’s ecosystem?
Are you building a business meant for long-term independent growth, or do you envision eventually being absorbed by a major player?
Is the Function Easily Replicable?
If your startup’s offering is merely a “small feature” within a giant’s product line, then surviving independently will be challenging.
Prominent investor Mark Suster warned entrepreneurs about “FNAC” ideas (Feature, Not A Company): some concepts are nothing more than product features and cannot sustain an independent company.
Take, for example, the once popular group-buying and group-chat apps. Essentially, they were features that large platforms could quickly replicate, and eventually, many of these startups either had to evolve into comprehensive products or were absorbed by larger companies.
Suster candidly stated, “Group texting is just a feature, not a company… Sending messages is a generic tool. Once Apple integrated the functionality with iMessage, it became extremely difficult for those startups to compete.”
If your AI product’s capabilities can be easily replicated by a giant through a small team or a quick acquisition, you need to carefully reconsider its standalone viability.
For example, an AI drawing assistant that simply wraps public models with a UI might soon be integrated into a larger software suite, squeezing out the market. Such “feature-based” startup projects are more likely to be acquired quickly after gathering user momentum or to form partnerships with giants, rather than striving for independent longevity.
Does the Product Possess an Inimitable Moat?
If your project involves deep technical or data barriers or requires complex service delivery that even a giant would find hard to duplicate in the short term, then it has the potential to stand on its own.
For instance, a product that relies on long-term data accumulation and model optimization may be difficult for a giant to replicate quickly, if at all; similarly, a business that depends on extensive offline operations might lack the organizational capability for rapid replication by a giant.
Cycle Intelligence’s case in 4S dealerships illustrates this point—by accumulating years’ worth of data and deep industry know-how, they built a market position that large companies found hard to challenge.
EvenUp’s AI legal service, which leverages exclusive data alongside specialized AI augmentation for professional legal services, presents a model where, even if a giant were to enter, it would struggle to match the comprehensive data and domain expertise.
If your project shows unique network effects, data monopolies, or deep service integration with a sufficiently large market, then it qualifies to be run as an independent company with a continually reinforced moat.
Does the Track Conflict with a Giant’s Core Interests?
Some projects are inherently more suited as components because they ultimately thrive on the giant’s ecosystem to amplify their value.
For instance, plugins or assistant tools built around a giant’s large model may be completely dependent on the platform’s support. In such cases, remaining independent may limit their growth potential. Instead of fighting alone, aligning closely with the platform—perhaps through early strategic investments or partnerships—could be more advantageous.
Conversely, if the track is outside the core areas of interest for giants or if they are hampered by organizational inertia and an inability to innovate in niche areas, these “areas that giants either don’t want or can’t do well” become fertile ground for independent startups.
As Zhu Xiaohu noted, in the AI era, the tide is turning—whereas in the past a giant’s interest in your field was a danger signal, now giants can’t possibly dominate every niche market. These peripheral areas offer a prime opportunity for small companies.
Thus, evaluate the giants’ strategic maps: if your product is merely the icing on the cake, eventually a giant might do it themselves; however, if it targets a specialized, in-depth segment away from the giants’ main routes, then you can confidently focus on building and dominating that niche.
Capital and Exit Considerations
A project’s nature can often be read from its funding and exit options.
If your business plan projects limited revenue over the next five years and your target users are closely tied to a giant’s products, investors might prefer an acquisition exit strategy—growing the company to the point where it becomes an attractive acquisition target for a larger firm, especially since integration into a giant’s ecosystem can unlock greater value.
Conversely, if the market’s horizon is vast and the company has the potential to become an industry platform, then the goal should be an IPO or long-term independent operation, and funding should come from investors who appreciate long-term growth value.
Recent trends illustrate this well: AI startups in the general chatbot space have struggled to compete with the giants, and many were ultimately absorbed. Character.ai, after raising over $150 million, saw its founders and core team move to Google under a licensing deal; Inflection, after raising $1.3 billion, was effectively absorbed by Microsoft in a similar arrangement; Adept, after raising $415 million, went to Amazon. In each case, the founding team ended up inside a giant.
Entrepreneurial ventures that directly challenge a giant’s core business are often destined for acquisition.
Thus, entrepreneurs need a clear self-assessment: if you aim to build an independent giant, you must choose the right battleground and create barriers; if you’re merely filling a gap in the giant’s ecosystem, early collaboration or a partnership might be the smarter choice.
Overall, the decision between going independent versus serving as an add-on hinges on the scale of value and the degree of substitutability.
If the track can generate independent, giant-level value that is hard to substitute, then pursue the independent path for long-term growth; otherwise, if the project is destined to be a “piece of the puzzle” within a larger ecosystem, then a flexible acquisition or partnership strategy should be adopted to maximize benefits for shareholders and the team.
Case Studies and Strategic Recommendations
Case 1: LibLib AI (Vertical Market Penetration)
Deeply understanding the needs of a specific audience.
LibLib is an AI drawing software designed for approximately 20 million professional designers in China, often dubbed the “AI version of Photoshop.”
Its core competitive edge isn’t based on the underlying image generation model (which has become ubiquitous thanks to open-source and popular models like Stable Diffusion and Midjourney), but rather on how deeply it aligns with local designers’ workflows and needs.
By offering a localized interface, a library of materials that adhere to Chinese design standards, and interactions tailored to the habits of designers, LibLib quickly won over a large user base.
Even when foundational technologies become widespread, focusing on the in-depth needs of a specific professional user group can still yield a sticky, specialized tool.
Their strategy was to “enter with a tool”—first attracting designers with an excellent AI drawing tool, and later embedding community features like designer interactions and material exchanges to gradually build an ecosystem. This validates the “tool first, network later” strategy in AI vertical applications.
Case 2: Cycle Intelligence (Human-AI Integration, Industry Focus)
Building a moat through hard work.
Cycle Intelligence focused on the sales process at car dealerships as its entry point, and after years of dedicated work, achieved an 80% market share nationwide.
Their approach involved launching a wearable AI device to assist sales—a solution that required full-stack capabilities combining both software and hardware, paired with keen insights into dealership sales processes and staff habits.
In the iterative process, the team invested significant manpower in data annotation and on-site support to solve real issues like transcription accuracy and analysis of sales conversations. These labor-intensive tasks, which big companies typically shy away from, became the moat that kept competitors at bay.
Don’t be afraid to start small and tackle the dirty work that others avoid; that’s how you break through in a red ocean.
As the saying goes, the final battleground is always a red ocean; hard, unglamorous work is what builds the moat. In the AI era, earning customer trust through diligent service counts for more than technical prowess alone.
Case 3: EvenUp (Data-Driven Legal AI)
Leveraging data advantages to disrupt traditional industries.
EvenUp targets the challenges in legal litigation by using AI to predict settlement amounts, providing lawyers with decision-making insights.
Their secret lies in building a proprietary dataset that no one else has: by partnering with multiple law firms, they collected and anonymized a vast array of historical personal injury compensation data.
With models trained on this data, EvenUp is able to offer compensation predictions that are more comprehensive than what any single law firm could provide. Initially offering tools for report drafting and case analysis to attract lawyers, the platform is gradually forming a network effect—model refinement through user feedback and a growing community of legal professionals sharing insights—thereby reinforcing its position as a critical infrastructure in the legal industry.
Their dual approach of “data moat + network effect” is simple: first attract users and data with a valuable tool, then use that data to further enhance the tool’s value, culminating in an indispensable platform in the vertical industry.
Case 4: Challenges for Plugin Startups Around Large Models
Some startups have chosen to build plugins or applications directly within the ecosystem of large models from giants—for example, plugins based on ChatGPT or AI assistants for Office suites.
While these projects might quickly capture users from the giant platforms in the short term, they face a high risk of being outflanked if the platform later develops a similar feature in-house.
One AI writing assistant startup, for instance, enjoyed rapid popularity only to lose users when Office released a similar built-in feature. This demonstrates that if you’re building an add-on within a giant’s ecosystem, you must be prepared for imminent replication.
A smart approach is to complement the giant’s ecosystem rather than compete head-on. Several excellent ChatGPT plugins have even received official recommendations or investments from OpenAI, becoming part of the ecosystem and enjoying robust growth.
If your project is positioned as a platform plugin, seek an ecosystem win-win by offering professional value that the platform is unlikely to replicate quickly, while also preparing a pivot strategy to survive once the giant deepens its involvement.
Strategic Path Recommendations
- Start small and iterate quickly: Choose a narrow but pressing pain point and build an MVP that serves real users. Refining your product in real-world conditions is more valuable than designing an elaborate business model in isolation. Take small, rapid steps; iterate based on user feedback to find true product-market fit.
- Leverage existing tools and platforms: Maximize the use of open-source models, large company APIs, and low-code development frameworks to lower technical barriers. Resources like Hugging Face’s model library or frameworks like LangChain can enable small teams to build sophisticated AI applications quickly. Instead of reinventing the wheel, focus on honing differentiating features and the user experience.
- Build data and community assets: Intentionally accumulate data assets—even if it requires initially using free services or manual efforts—and develop a dedicated early user community. Loyal users not only provide valuable data but can also help spread the word and contribute ideas for product improvements. For example, Hugging Face’s community model shows how forums and sharing platforms can drive strong network effects.
- Embedded innovation: Integrate AI capabilities into existing industry workflows, rather than forcing users to adopt a new process. Get close to the customer’s frontline operations, understand how each step can be optimized with AI, and seamlessly embed your product into their existing systems. For B2B services, consider integrating with existing software (like CRM or ERP systems) so that AI features appear within a familiar interface. This embedded approach accelerates adoption and creates deep, sticky integration with the customer.
- Ensure cash flow and self-sustainability: In the current investment climate, the ability to commercialize matters more than conceptual brilliance. Explore monetization strategies with paying customers as early as possible—even if that means starting with small deals. Options such as SaaS subscriptions, usage-based billing, or result-based pricing (e.g., based on lead generation or cost savings) should be explored to identify the optimal path. Ensuring positive cash flow is essential for long-term survival.
- Keep learning and adapting: The AI landscape changes rapidly, so stay informed. Follow analyses and podcasts from leading venture firms like a16z or Sequoia—many of which have dedicated AI reports—as well as insights from Y Combinator. In China, pay attention to leading voices via platforms like Zhihu or GeekPark. Adapt external insights to your unique context, and always be wary of one-size-fits-all advice. As Elad Gil says, “All startup advice needs to be contextualized.”
- Choose the right tools and infrastructure: Select cloud and development tools based on your product’s needs. If large-scale model inference is required, consider GPU cloud services from providers like Alibaba Cloud or AWS, along with startup support resources; if real-time edge performance is critical, platforms like NVIDIA Jetson may be appropriate. Employ ML Ops tools—such as Weights & Biases for training monitoring or DVC for data versioning—to boost efficiency. The key is not to chase the newest technology but to pick what best matches your team’s capabilities and business requirements. Keep your initial system simple to avoid over-engineering.
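The "leverage existing tools" advice above largely comes down to putting a thin orchestration layer around a model call: a prompt template, the model, and some post-processing. Here is a minimal sketch of that pattern, the same shape that frameworks like LangChain formalize. The `stub_llm` function and `PromptChain` class are illustrative stand-ins invented for this example, not a real LangChain API; in production you would swap the stub for an actual API client.

```python
from dataclasses import dataclass
from typing import Callable

def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. an OpenAI or Hugging Face endpoint);
    # it just echoes the prompt so the sketch runs offline.
    return f"[model answer to: {prompt}]"

@dataclass
class PromptChain:
    """Template -> model -> postprocess, the core loop most LLM apps wrap."""
    template: str                       # e.g. "Summarize for a lawyer: {text}"
    llm: Callable[[str], str]
    postprocess: Callable[[str], str] = str.strip

    def run(self, **kwargs: str) -> str:
        prompt = self.template.format(**kwargs)
        return self.postprocess(self.llm(prompt))

chain = PromptChain(template="Summarize for a lawyer: {text}", llm=stub_llm)
result = chain.run(text="Settlement data for 2024...")
```

The point of the sketch is that the moat is not in this plumbing, which any team can assemble in a day, but in the template design, the data behind it, and the workflow it is embedded in.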
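For the "usage-based billing" option mentioned above, the mechanics are simply metering plus tiered pricing. A toy sketch follows; all tier boundaries and per-unit prices here are invented for illustration, not real market rates.

```python
def usage_bill(units, tiers=((1_000, 0.0), (100_000, 0.002), (None, 0.001))):
    """Tiered usage-based billing: first 1,000 units free, the next
    99,000 at $0.002/unit, and everything beyond at $0.001/unit.
    (Illustrative numbers only.)"""
    total, remaining, prev_cap = 0.0, units, 0
    for cap, price in tiers:
        # Units that fall inside this tier (last tier is uncapped).
        span = remaining if cap is None else min(remaining, cap - prev_cap)
        total += span * price
        remaining -= span
        if remaining <= 0:
            break
        prev_cap = cap
    return round(total, 2)

# 150,000 units: 1,000 free + 99,000 * $0.002 + 50,000 * $0.001 = $248.00
```

A free first tier lowers the barrier to trial, while the marginal price declining with volume rewards heavy users, which is a common shape for API pricing.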
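On the ML Ops point above, the record-keeping core of experiment tracking is small enough to sketch. The `RunLogger` class below is a bare-bones illustration written for this article, not the Weights & Biases API; tools like W&B add dashboards, run comparison, and artifact storage on top of essentially this pattern of saving a config plus per-step metrics.

```python
import json
import time
from pathlib import Path

class RunLogger:
    """Minimal experiment tracker: one directory per run, holding the
    run's config and an append-only JSONL file of per-step metrics."""

    def __init__(self, run_dir: str, config: dict):
        self.path = Path(run_dir)
        self.path.mkdir(parents=True, exist_ok=True)
        (self.path / "config.json").write_text(json.dumps(config))
        self.metrics = self.path / "metrics.jsonl"

    def log(self, step: int, **metrics: float) -> None:
        record = {"step": step, "time": time.time(), **metrics}
        with self.metrics.open("a") as f:
            f.write(json.dumps(record) + "\n")

# Usage: one logger per training run.
logger = RunLogger("runs/demo", config={"lr": 3e-4, "batch_size": 32})
for step in range(3):
    logger.log(step, loss=1.0 / (step + 1))
```

Starting with something this simple and only adopting heavier tooling when the team actually needs it is one way to follow the "avoid over-engineering" advice above.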
In an AI era dominated by giants, small and medium-sized startups can carve out their own space through differentiated strategies.
By identifying gaps the giants overlook at the macro level and rapidly validating ideas with lean methods at the micro level, backed by deep data and service capabilities and guided by proven entrepreneurial methodology, small teams can greatly improve their odds of success.
In this fast-moving technological tide, pairing agile focus against massive competitors, and sharp, scenario-driven applications against mere model stacking, may well be how the next entrepreneurial legend gets written.