In a world where artificial intelligence is reshaping industries and raising new questions of trust, ethics, and responsibility, open dialogue has never been more essential. On May 30th, 2025, the Hong Kong Association of Interactive Marketing hosted the inaugural Hong Kong AI Governance Conference: Shaping the Future of Responsible GenAI in Hong Kong. This landmark event convened thought leaders, policymakers, industry executives, and innovators to explore the next steps for trustworthy and dynamic AI development in Hong Kong.
The conference opened with welcoming remarks from Mr. Francis Fong, Chairman of the Hong Kong Association of Interactive Marketing, followed by an officiating address by Professor Sun Dong, Secretary for Innovation, Technology and Industry.
Attendees were then treated to two inspiring keynote presentations:
- Ir Tony Wong, JP, Commissioner for Digital Policy
Theme: "Balancing innovation and regulations – realizing the full potential of GenAI for Hong Kong"
- Ms. Ada Chung, Privacy Commissioner for Personal Data
Theme: "New horizons of privacy protection in the era of GenAI"
Both keynote speakers emphasized not only the opportunities brought by generative AI, but also the need for Hong Kong to take a leading role in setting high standards for governance, privacy, and cross-sector collaboration. Their forward-looking perspectives helped set the tone for the entire conference, underlining the urgency and relevance of the day’s discussions.
Additionally, the conference included a feature presentation by Mr. HK Chan, Chief of Operations Engineering Services & Innovation, MTR Corporation, alongside expert panel discussions addressing business, privacy, and copyright issues in AI.
As moderator, I had the privilege of leading the panel, “Cultivating AI Literacy and Ensuring Privacy Protection in AI Development,” one of several highlights in a content-rich agenda. Together with three distinguished guest speakers, we explored how Hong Kong can foster AI literacy within organizations and society while upholding privacy protection and ethical standards as AI becomes ever more integrated into our lives. Below is a multi-perspective recap of the panelists’ contributions, highlighting their diverse insights and approaches.
Mr. Martin Liu – Building an Ecosystem for Responsible AI
As Assistant Director of AI & Data at the Hong Kong Science & Technology Parks Corporation (HKSTP), Mr. Martin Liu oversees an innovation ecosystem that is home to over 2,200 technology companies and startups. Martin’s perspective is that responsible AI starts with infrastructure and support systems that help startups and established companies alike move from ideas to impactful solutions.
He explained how HKSTP is not only investing in cutting-edge AI research and resources but is also committed to making sure these innovations translate into real-world applications. For instance, HKSTP provides startups with sandbox environments and pilot schemes, allowing them to test and refine their AI products safely before scaling up. Martin emphasized that this approach helps mitigate risks related to data privacy and compliance, which are especially critical in sectors such as healthcare and finance.
He also spoke about HKSTP’s role in AI talent development. By running regular workshops, hackathons, and collaboration programs with universities, HKSTP ensures that entrepreneurs and engineers are not just technically proficient but also understand the ethical dimensions of AI—particularly privacy protection, responsible data use, and the importance of earning user trust.
Martin highlighted some recent examples: an AI-powered medical diagnostic tool developed by a local startup, and a fintech solution using machine learning for fraud detection. Both projects benefited from HKSTP’s guidance on data privacy standards and compliance, showcasing how support at the ecosystem level can accelerate innovation without compromising trust.
What I found especially valuable in Martin’s comments was the reminder that building trust and capability goes hand in hand: the right technical infrastructure is important, but so too is the guidance on ethical and regulatory best practices that allows innovation to scale responsibly.
Ms. Fan Ho – Lenovo’s Journey Toward Responsible and Impactful AI
Ms. Fan Ho, Executive Director and General Manager of Lenovo’s Asia Pacific Solutions and Services Group, shared how Lenovo is embracing a holistic approach to AI—one that goes beyond technical prowess and emphasizes ethical responsibility, social impact, and transparency.
Fan described Lenovo’s multi-layered AI governance structure, including a dedicated AI Governance Board that oversees the ethical design, deployment, and management of AI solutions globally. This board is responsible for ensuring all Lenovo products and services adhere to six core principles: diversity & inclusion, privacy & security, accountability & reliability, explainability, transparency, and environmental & social impact.
She shared inspiring cases where Lenovo’s AI is making a tangible difference:
- Inclusive Education: Lenovo has collaborated with educational institutions to develop AI-based communication tools for children with autism and other special needs, helping them participate more fully in classroom and social settings.
- Healthcare Innovation: AI-powered diagnostic tools deployed by Lenovo are enabling faster, more accurate medical screenings, especially in resource-constrained environments.
- Sustainable Operations: AI is helping Lenovo optimize supply chains, reduce energy consumption, and minimize environmental impact—contributing to the company’s ESG (Environmental, Social, and Governance) goals.
Fan emphasized that AI literacy is a key part of Lenovo’s internal training, ensuring employees at all levels understand not just how to use AI, but also why ethical considerations matter. She acknowledged challenges—such as ensuring global consistency across markets and balancing rapid innovation with regulatory compliance—but stressed that a values-driven culture is the best safeguard against missteps.
Her remarks complemented Martin’s: while HKSTP offers the ecosystem and regulatory guidance, Lenovo demonstrates how a global enterprise can embed responsible AI practices deeply within its corporate DNA—bridging technology, business value, and social responsibility.
Ir Alex Chan – Defending the Digital Frontier with AI
As General Manager of Digital Trust & Transformation at the Hong Kong Productivity Council (HKPC), Ir Alex Chan focused on the practical and urgent side of AI adoption: cybersecurity and risk management.
Alex began by illustrating the “double-edged sword” nature of AI in cybersecurity. On one hand, AI-driven analytics can help organizations identify vulnerabilities, detect anomalies, and automate the response to cyber threats far faster than traditional methods. HKPC, for example, has developed systems that can spot phishing websites and prevent data breaches before they occur—a crucial line of defense for local businesses.
On the other hand, Alex warned that attackers are now using AI to increase the sophistication and scale of their attacks. Phishing websites, social engineering, and malware campaigns can be launched by AI-powered bots, making them harder to detect and stop. He stressed that this evolving threat landscape means organizations must be proactive, investing in not only the latest defensive technologies but also in continuous training for staff.
Alex offered concrete advice for SMEs, who may lack the resources of large enterprises. He recommended leveraging public resources—such as HKPC’s cybersecurity toolkits and government-issued guidelines—and forming peer networks to share intelligence. Importantly, he called for a shift in mindset: viewing cybersecurity as a collective responsibility rather than a one-off compliance task.
I was particularly struck by Alex’s call for vigilance and collective responsibility. While the opportunities of AI are vast, so too are the risks—making it essential for organizations of all sizes to invest in human capital, knowledge-sharing, and proactive defense.
Key Takeaways: Literacy, Trust, and Collaboration
A unifying message from all panelists was that AI adoption cannot succeed without a strong foundation of literacy and trust. It’s not enough for organizations to deploy new technologies; they must also invest in educating their people—raising awareness about privacy, ethics, and risk, from leadership to the front line.
Another theme was the power of collaboration—across sectors, between startups and corporates, and through ongoing dialogue with regulators and civil society. The road to advanced AI, including Artificial General Intelligence (AGI), requires not just technical breakthroughs but also shared values and trust-building at every step.
Looking Ahead
As moderator, I left the discussion inspired by the energy and conviction of our speakers. Their stories made it clear that responsible AI is everyone’s business: from boardrooms to classrooms, from innovation hubs to the daily operations of every enterprise.
Let us continue to champion AI literacy, strengthen privacy protection, and foster an open, ethical ecosystem where technology serves humanity. Together, Hong Kong can lead by example—showing that in the age of AI, responsibility and progress can, and must, go hand in hand.
Thank you to all panelists, organizers, and participants for making this session a success. Stay tuned for more insights and upcoming initiatives from the Hong Kong Association of Interactive Marketing.
By Dr. Ken Fong
Honorary Advisor & Convenor, GenAI Committee, Hong Kong Association of Interactive Marketing
Summary: Cultivating AI Literacy and Ensuring Privacy Protection in AI Development – A Full Account of the Hong Kong AI Governance Conference
On May 30, 2025, the Hong Kong Association of Interactive Marketing held the inaugural Hong Kong AI Governance Conference under the theme “Shaping the Future of Responsible GenAI in Hong Kong,” bringing together government officials, industry leaders, academics, and innovators to discuss the future of generative AI and responsible governance. The conference comprised keynote speeches, corporate case studies, and panel discussions that examined digital policy, privacy protection, industry applications, AI ethics, and collaborative innovation.
The conference opened with a welcome address by Mr. Francis Fong, Chairman of the Hong Kong Association of Interactive Marketing, and an officiating address by Professor Sun Dong, Secretary for Innovation, Technology and Industry. The two keynote speeches were delivered by Ir Tony Wong, Commissioner for Digital Policy, and Ms. Ada Chung, Privacy Commissioner for Personal Data, focusing respectively on realizing the full potential of GenAI while balancing innovation and regulation, and on new horizons of privacy protection in the GenAI era – underscoring the government’s commitment to advancing AI innovation alongside regulation and to safeguarding citizens’ personal data.
Among the panel sessions, the one I moderated, “Cultivating AI Literacy and Ensuring Privacy Protection in AI Development,” invited three leaders from industry and technology institutions to examine how Hong Kong can raise AI literacy and strengthen privacy protection for businesses and the public.
First, Mr. Martin Liu, Assistant Director of AI & Data at the Hong Kong Science & Technology Parks Corporation (HKSTP), introduced the Science Park as the city’s largest innovation and technology ecosystem, supporting more than 2,200 technology companies. He stressed that responsible AI is not only about technical deployment but also requires robust support systems, including sandbox pilots, compliance guidance, and talent training, to help startups test and deploy AI solutions safely. HKSTP also runs workshops and university collaboration programs to raise practitioners’ AI literacy and their awareness of privacy and compliance. Citing examples in AI-assisted medical diagnostics and financial fraud prevention, he argued that companies can only grow sustainably by balancing innovation with trust.
Ms. Fan Ho, Executive Director and General Manager of Lenovo’s Asia Pacific Solutions and Services Group, shared how Lenovo puts AI ethics and corporate social responsibility (ESG) into practice: it has established an AI Governance Board and adopted six core principles – diversity and inclusion, privacy and security, accountability and reliability, explainability, transparency, and environmental and social impact – to ensure its products and services worldwide meet ethical and regulatory requirements. She cited examples of Lenovo’s AI creating positive impact in special education, medical screening, and supply-chain emissions reduction, such as helping children with autism communicate and widening access to healthcare. Lenovo also places great weight on internal AI literacy training, while acknowledging that companies must continually balance the pace of innovation against regulatory compliance to avoid ethical risks.
Ir Alex Chan, General Manager of Digital Trust & Transformation at the Hong Kong Productivity Council (HKPC), focused on AI’s dual role in cybersecurity. He noted that AI can help detect network vulnerabilities and block phishing websites, but that hackers are also using AI to sharpen their attacks, escalating the risks. He advised companies to proactively strengthen staff security training and make good use of HKPC’s cybersecurity tools and government guidelines, and urged SMEs in particular to build peer intelligence-sharing networks and a sense of collective defense. He stressed that cybersecurity should not be treated as a mere compliance task but as a collective responsibility in every company’s daily operations.
The three speakers’ experiences echoed one another, showing that AI adoption is never just a matter of technical upgrades; it must be grounded in literacy education and trust. AI literacy should permeate organizations from leadership to the front line, and awareness of privacy and ethics should run through the entire innovation process. At the same time, cross-sector collaboration and dialogue with regulators and society are key to the Hong Kong AI ecosystem’s progression toward advanced AI, including Artificial General Intelligence (AGI). As moderator, I am convinced that advancing responsible AI is not only the duty of businesses and government but requires participation from the whole of society, so that Hong Kong can become a model of trust and innovation in the AI era.
Keywords
Hong Kong AI Governance Conference, AI literacy, privacy protection, responsible AI, GenAI, generative AI, AI governance, artificial intelligence, data privacy, digital policy, AI regulation, HKSTP, Lenovo, HKPC, AI ecosystem, cybersecurity, digital transformation, AI innovation, AI ethics, ESG in AI, AI for business, AI in education, AI in healthcare, AI in Hong Kong, technology conference Hong Kong, AI talent development, AI best practices, AI collaboration, Artificial General Intelligence, AGI, AI trust, AI risk management, AI for SMEs, AI compliance, data security, digital trust, Hong Kong technology events