Keynote Speakers


    Stefan Poslad
    Senior Lecturer, Queen Mary University of London
    Title: In an Age of Many Successful Big Data AI Model Rollouts & Apps – Is There Still a Case for Utilising Small Data-driven and Distributed AI?

    The current dominant trend in AI is for a small number of organisations (industrial, institutional or national, but seldom citizen-driven) to amass big data and apply increasingly sophisticated AI algorithms (e.g., deep machine learning), executed on centralised, cloud-based, high-computation resources, to solve real-world problems that are more general, hence the term AGI (Artificial General Intelligence). Yes, some problems can be solved better this way, but is this the most suitable way to solve the many outstanding, critical environmental and human societal issues?

    In this talk, some reflections will be offered to counter and complement the current dominant (big data, deep) AI model trend. This includes a discussion of the multi-dimensional aspects of human, animal and plant intelligence that still surpass current AGI, the motivation for distributed, small-data (IoT)-driven AI, a discussion of some models and applications to enable this, and a future outlook.

    Stefan Poslad leads the Internet of Things (IoT) research, more specifically the IoT2US (Internet of Things to Ubiquitous Computer Science) Lab, at Queen Mary University of London, where he is a fellow of the Digital Environment Research Institute (DERI) and part of the Cognitive Science research group. He is a fellow of the Alan Turing Institute, the UK's national AI institute, and is listed in Stanford University's ranking of the top 2% of scientists worldwide. His current research and innovation focus is on AI-, data science- and geoscience-driven models to enhance how we physically sense and analyse shifts in human location and motion behaviour, as well as changes in the physical environment, both outdoors and indoors. He has been the lead researcher for QMUL in over 15 international collaborative projects with industry. He is the sole author of a leading book, Ubiquitous Computing: Smart Devices, Environments and Interaction, which has over 1,000 research citations and is used for teaching by over 70 institutes worldwide, across six continents.



    Chanikarn Wongviriyawong
    CEO of EATLAB, Thailand
    www.eatlab.io
    Chanikarn "Mint" Wongviriyawong is the founder and CEO of EATLAB, the first spinout from KMUTT to serve leading food and beverage production companies by scientifically measuring consumer responses to their new products. She provides consultancy to various agencies on innovation, data analytics, affective and cognitive modeling, and commonsense artificial intelligence. Prior to joining KMUTT, she was a quantitative trader and co-founded a startup, Matternet, while at Singularity University. Matternet is a drone delivery network that now partners with Mercedes-Benz and the WHO. Dr. Wongviriyawong received her academic training in Mechanical Engineering: she holds a PhD in Mechanical Engineering from MIT with a minor in Political Science, a master's degree from the MIT Media Lab, and a bachelor's degree from Carnegie Mellon University with double minors in Robotics and Computer Science.



    Phitchaya Mangpo Phothilimthana
    Research scientist at Google DeepMind
    mangpo.net
    Title: Machine Learning for Machine Learning Compilers at Google

    Abstract:
    Search-based techniques have been demonstrated to be effective in solving complex optimization problems that arise in domain-specific compilers for machine learning (ML). Unfortunately, deploying such techniques in production compilers is impeded by several limitations. In this talk, I will present an autotuner for production ML compilers that can tune both graph-level and subgraph-level optimizations at multiple compilation stages. We demonstrate how to incorporate machine learning techniques, such as a learned cost model and various learning-based search strategies, to reduce autotuning time. Our learned cost model has high accuracy and outperforms a heavily optimized analytical performance model. In an evaluation across 150 ML training and inference models on Tensor Processing Units (TPUs), the autotuner delivers up to a 2.4x runtime speedup, and a 5% average speedup, over the heavily optimized XLA compiler. I will outline how we deploy the learning-based XLA autotuner at datacenter scale to automatically tune the most heavily used production models in Google's fleet every day. The deployed tile-size autotuner has been saving approximately 2% of fleet-wide TPU compute time.
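    As a rough, hypothetical illustration of the kind of cost-model-guided search described in the abstract (not Google's XLA autotuner itself), the Python sketch below ranks candidate tile sizes with a cheap stand-in for a learned cost model and benchmarks only the top few predictions; every function name and formula here is an illustrative assumption.

        import random

        # Stand-in for a learned cost model: maps a candidate tile size to a
        # predicted runtime. A real autotuner would use a trained ML model here.
        def predicted_runtime(tile_size: int) -> float:
            return abs(tile_size - 128) / 128 + random.random() * 0.05

        # Stand-in for compiling and benchmarking a candidate on real hardware,
        # the expensive step that the learned model helps the search avoid.
        def measure_runtime(tile_size: int) -> float:
            return abs(tile_size - 120) / 128 + 0.01

        def autotune(candidates, top_k=3):
            """Rank candidates with the cheap model, benchmark only the top_k
            predictions, and return the fastest measured candidate."""
            ranked = sorted(candidates, key=predicted_runtime)
            return min(ranked[:top_k], key=measure_runtime)

        if __name__ == "__main__":
            tile_sizes = [2 ** p for p in range(3, 11)]  # candidate tile sizes 8..1024
            print("chosen tile size:", autotune(tile_sizes))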

    Phitchaya "Mangpo" Phothilimthana is a research scientist at Google DeepMind, where she leads the Machine Learning for Machine Learning Compilers effort (one of Google Brain's moonshots in 2020). Her research interests include compilers, machine learning for systems, program synthesis, and energy-aware computing. She completed a PhD in Computer Science at UC Berkeley, where her dissertation focused on synthesis-aided compilation and programming models for emerging architectures, ranging from an ultra-low-power processor to a programmable network card. Dr. Phothilimthana was a recipient of the Microsoft Research PhD Fellowship and the Qualcomm Innovation Fellowship.
