Alibaba Boosts AI Efforts with New Open-Source Models and Text-to-Video Technology

Chinese tech giant Alibaba has ramped up its presence in the fast-growing generative AI market by releasing new open-source AI models and unveiling text-to-video AI technology. This move is part of the company’s strategy to compete with major players in the artificial intelligence space.

On Thursday, Alibaba introduced more than 100 new AI models from its Qwen 2.5 family, the latest generation of its large language model line first released in May. The models range in size from 0.5 billion to 72 billion parameters and can handle tasks such as mathematics and coding, with support for more than 29 languages.

Unlike competitors such as Baidu and OpenAI, which have largely focused on closed-source AI, Alibaba has adopted a hybrid approach, developing both proprietary and open-source models to offer a diverse range of AI products for sectors such as automotive, gaming, and scientific research.

Alongside the new models, Alibaba launched a text-to-video AI tool as part of its Tongyi Wanxiang image-generation suite, entering the competitive market for AI-generated video content. This positions Alibaba against international competitors such as OpenAI, which is also exploring text-to-video technology.

Chinese tech companies are increasingly investing in AI to strengthen their portfolios. In August, ByteDance, the parent company of TikTok, launched its text-to-video app Jimeng AI for Chinese users, reflecting a growing push among Chinese tech firms into AI-generated video content.

With these latest releases, Alibaba aims to position itself as a major player in the AI industry, competing with global leaders and expanding its reach across various industries.
