The biggest European conference on ML, AI, and Deep Learning applications, running in person in Prague and online.
Machine Learning Prague 2024
In cooperation with Kiwi.com
– , 2024
Registration
World-class expertise and practical content packed into 3 days!
At ML Prague 2024 you can look forward to an excellent lineup of 40 international experts in business and academic applications of ML and AI. They will present advanced practical talks, hands-on workshops, and other forms of interactive content.
What to expect
- 1000+ Attendees
- 3 Days
- 40 Speakers
- 10 Workshops
Phenomenal Speakers
Practical & Inspiring Program
Friday
Workshops
O2 Universum, Českomoravská 2345/17a, 190 00, Praha (workshops won't be streamed)
Registration
Morning workshops (Rooms D2, D3, D4, D6, D7), with a coffee break
Using Graph Neural Networks to Improve Telecommunication Networks (Room D2)
Massimiliano Kuck, Sopra Steria SE
This workshop aims to provide an in-depth understanding of modeling and optimizing telecommunication networks using Graph Neural Networks (GNNs). Telecommunication networks, characterized by a complex structure of interconnected nodes, can be naturally modelled as graph data structures. However, using graph data structures in machine learning poses unique challenges. The workshop focuses on various aspects of integrating GNNs into telecommunication networks to enhance their performance and customer satisfaction. GNNs introduce a new level of sophistication that enables intelligent decision making, effective network mapping, prediction, and optimization. The key component of the workshop is a practical use case in which participants work with a Neo4j graph database. The database helps visualize and analyze the complex connections in the network and provides the basis for the GNN. Because GNNs leverage the rich information present in the network structure, valuable insights and predictions about network activity can be made efficiently. Using these predictions to optimize network configuration settings and address network problems can lower operational costs, improve network efficiency, and raise customer satisfaction. Participants will learn the practical application of modeling telco networks as graph structures, implementing GNNs, and interpreting their predictions for optimization. Moreover, graph structures are not confined to telecommunications; they appear across every industry sector, and GNNs are applicable to many different use cases. This is an excellent opportunity for practitioners, researchers, and anyone interested in graph data modeling and machine learning to learn from industry experts. By the end of this workshop, participants will have a deeper understanding of the potential of GNNs, enabling them to apply this technology to optimize various network systems.
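To give a flavour of the kind of model involved, here is a minimal, hypothetical sketch (not the workshop's code) of a two-layer graph convolutional network classifying nodes of a toy graph with PyTorch Geometric; the features, edges, and labels below are random placeholders rather than the Neo4j telco dataset.

```python
# Minimal sketch: node classification on a small synthetic "network topology" graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 100 nodes with 16 features each and 400 random edges.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
y = torch.randint(0, 2, (100,))          # e.g. "congested" vs "healthy" node (placeholder labels)
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)
        self.conv2 = GCNConv(32, 2)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(50):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    optimizer.step()
```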
Real-Time Anomaly Detection in Python (Room D3)
Tomáš Neubauer, Quix
Get to grips with real-time anomaly detection in Python by working through a use case for detecting cyclist crashes. In this hands-on workshop we will learn how to build a streaming data pipeline using Kafka to handle telemetry events from a bicycle sensor/fitness app. From there, we will collect data to label, train an ML model, and deploy it to predict crashes as they happen, in real time.
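As a rough illustration of the consuming side of such a pipeline, here is a minimal sketch using kafka-python; the topic name, field names, and the simple threshold standing in for a trained model are all assumptions, not the workshop's actual solution.

```python
# Minimal sketch: consume telemetry events from Kafka and flag acceleration spikes.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "bike-telemetry",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

CRASH_THRESHOLD = 30.0  # m/s^2, placeholder for a real model's decision rule

for event in consumer:
    telemetry = event.value               # e.g. {"rider": "42", "accel": 3.1}
    accel = abs(telemetry.get("accel", 0.0))
    if accel > CRASH_THRESHOLD:
        print(f"Possible crash for rider {telemetry.get('rider')}: accel={accel}")
```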
Chatting with Your Data: A Hands-on Introduction to LangChain (Room D4)
Marina Volkova, HumbleBuildings
In this interactive workshop we cover how to get models to work with your own data, whether it is text, PDFs, CSVs, or a SQL database. We'll show you how to efficiently run Large Language Models (LLMs) on a standard 16 GB RAM machine. During the workshop you will gain hands-on experience with the LangChain framework and have the opportunity to create an application that allows you to interact with your data through natural language. The beauty of this workshop lies in its accessibility: we leverage low-code frameworks that make it suitable for a broad audience. All you need is a laptop with an internet connection, access to the Google Colab environment, and, optionally, an OpenAI API key to achieve even more impressive results. Join us on this exciting journey of data interaction and exploration!
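For orientation, a minimal "chat with your documents" sketch in LangChain might look like the following; note that LangChain's API changes frequently, so the imports and call shapes below (and the file name) are assumptions based on the langchain / langchain-community / langchain-openai split, and an OPENAI_API_KEY is assumed to be set.

```python
# Minimal sketch: split a local text file, index it, and ask questions over it.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import RetrievalQA

raw_text = open("my_notes.txt").read()          # any document of your own
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(raw_text)

vectorstore = FAISS.from_texts(chunks, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=vectorstore.as_retriever(),
)
print(qa.invoke({"query": "What does the document say about deadlines?"})["result"])
```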
Empowering Question Answering Systems with RAG: A Deep Dive into Retrieval Augmented Generation (Room D6)
Peter Krejzl, Emplifi
Join us for a workshop on Retrieval Augmented Generation (RAG), a groundbreaking approach in machine learning that seamlessly integrates information retrieval with text generation. Dive deep into the construction of advanced question-answering systems that leverage private knowledge bases, moving beyond mere document retrieval to generate coherent, natural language responses. Throughout the workshop, participants will benefit from hands-on experience with real datasets and the latest LLM techniques, ensuring practical comprehension. By harnessing the power of semantic search and private databases, a RAG-based system promises a new user experience and proprietary, context-specific solutions.
- Introduction to RAG: We begin by introducing the concept of Retrieval Augmented Generation (RAG) and its importance in the landscape of modern machine learning solutions.
- Dual strength: RAG uniquely combines an information retrieval component with a text generation model, offering a two-pronged approach to problem-solving.
- Private knowledge bases: Traditional systems rely heavily on public datasets. In this workshop, participants will learn to build systems that leverage private knowledge bases, ensuring proprietary and context-specific responses.
- Beyond simple retrieval: It's not just about finding the right documents. Our RAG-based system will not merely retrieve relevant articles but will craft answers in coherent, natural language, enhancing the user experience.
- Practical implementation: We will guide attendees through the process of building a question-answering system powered by semantic search and the latest LLM techniques.
- Hands-on experience: Participants will work with real datasets and observe live demonstrations, ensuring a practical understanding of the RAG system.
- Future potential: The workshop concludes with a discussion of the future potential and advancements of RAG in diverse applications.
Join us for the workshop and elevate your understanding of how RAG is reshaping the frontier of question-answering systems.
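To make the two RAG stages concrete (retrieve, then generate), here is a minimal, library-light sketch assuming sentence-transformers for embeddings; the toy in-memory "knowledge base" and field names are illustrative placeholders, not the workshop's private database.

```python
# Minimal sketch of RAG: semantic retrieval followed by prompt construction for an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

knowledge_base = [
    "Our premium plan includes 24/7 phone support.",
    "Refunds are processed within 14 days of the request.",
    "The mobile app supports offline mode since version 3.2.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(knowledge_base, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question (cosine similarity)."""
    q = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [knowledge_base[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would then be sent to an LLM of your choice for the generation step.
print(prompt)
```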
Automated Evaluation of LLM-Based Systems (Room D7)
Marek Matiáš, O2/Dataclair
Complex LLM-based solutions cannot be developed without a robust system for evaluating their outputs. Whenever you change the models, prompts, or other components of the system, you need a metric that tells you whether overall performance has improved. We will outline a possible solution to this problem, covering the broader picture as well as a hands-on exercise.
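The core idea can be sketched in a few lines: run a fixed test set through the system after every change and track an aggregate score. The scoring rule and the `ask_llm` placeholder below are illustrative assumptions, not the workshop's solution.

```python
# Minimal evaluation-harness sketch: score answers against expected keywords.
test_cases = [
    {"question": "What is the capital of the Czech Republic?", "expected": ["prague"]},
    {"question": "Name a transformer-based language model.", "expected": ["gpt", "bert", "llama"]},
]

def ask_llm(question: str) -> str:
    """Placeholder: call the model / prompt / pipeline under test here."""
    return "Prague is the capital."

def score(answer: str, expected: list[str]) -> float:
    """1.0 if any expected keyword appears in the answer, else 0.0."""
    answer = answer.lower()
    return float(any(keyword in answer for keyword in expected))

results = [score(ask_llm(case["question"]), case["expected"]) for case in test_cases]
print(f"Accuracy: {sum(results) / len(results):.2f}")
# Re-run after every prompt or model change to see whether the metric moved.
```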
Afternoon workshops (Rooms D2, D3, D4, D6, D7), with a coffee break
Building OpenAI Plugins: Deep Dive into Microsoft Semantic Kernel (SK) (Room D2)
Daniel Costea, European Agency
Microsoft Semantic Kernel (SK) is a new technology that enables the integration of AI Large Language Models (LLMs) such as GPT-3.5-Turbo, GPT-4, and DALL-E 3 from OpenAI or Azure OpenAI with conventional programming languages like C#, Python, and Java. SK brings together several key components to provide planning and execution capabilities: a robust kernel that provides the foundation for all other components, plugins (formerly known as skills) for performing specific tasks, connectors for interfacing with external systems, memories for storing information about past events, steps for defining individual actions, and pipelines for organizing complex multi-stage plans. In this hands-on workshop we explore how to build various plugins for:
- building a semantic interface for an existing API using plugins and execution plans containing semantic and native functions;
- building a GPT-powered chat enriched by real-time information and memories, enhanced through RAG (Retrieval-Augmented Generation) capabilities;
- building a cutting-edge generative model using DALL-E 3 and multi-modal input.
Finetuning Open-Source LLMs to Small Languages (Room D3)
Petr Simecek, Mediaboard
Large Language Models (LLMs) represent a remarkable advancement in artificial intelligence, boasting the capability to generate and comprehend human language. Trained on vast text and code datasets, these models excel at a variety of tasks such as translation, summarization, and question answering. However, a major limitation arises when these LLMs, predominantly trained on English data, are applied to other languages, particularly smaller languages like Czech. Notable models like ChatGPT, Bard, and Claude are proficient in Czech, with minimal grammatical and stylistic errors. Yet many contemporary open-source LLMs, influenced heavily by English-centric datasets, fail to handle even basic Czech queries. So what are the choices? At Monitora, our initial experiments with ChatGPT for text summarization have now transitioned to Llama 2 7B, primarily due to privacy considerations. We are also evaluating the Mistral models introduced in September. In this workshop I will give an introduction to QLoRA adapters and demonstrate that instruction finetuning of such models is possible even with limited resources (a single consumer GPU). We will see that models that originally spoke a very broken language improve significantly through this process. In addition to the technical insights, I envision this workshop as a collaborative forum: rather than a traditional presentation, it aims to be a platform for knowledge exchange. Attendees are encouraged to contribute their insights, plans, or experiences related to the application of LLMs to small languages. To keep things structured, each interested participant is limited to 5 slides and a presentation time of 5 minutes.
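As a rough idea of what a QLoRA setup looks like with the Hugging Face stack (transformers, peft, bitsandbytes), here is a minimal sketch; the model id and hyperparameters are illustrative, and a real run would add a Czech instruction dataset plus a training loop such as Trainer or SFTTrainer.

```python
# Minimal QLoRA preparation sketch: 4-bit quantized base model + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"      # illustrative choice

bnb_config = BitsAndBytesConfig(            # 4-bit NF4 quantization, the "Q" in QLoRA
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(                   # small trainable adapters on attention projections
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of all weights
```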
Unlocking the Power of Active Learning: A Hands-on Exploration (Room D4)
Fabian Kovac, St. Pölten University of Applied Sciences
In today's ever-evolving landscape of artificial intelligence and machine learning, staying at the cutting edge is not just an advantage, it's a necessity. Active Learning has emerged as a powerful technique with the potential to revolutionize how we train machine learning models. This hands-on workshop provides attendees with a comprehensive understanding of its relevance, benefits, and practical applications. Active Learning is a paradigm-shifting approach that enables machines to learn more efficiently from limited labeled data by actively selecting the most informative examples for annotation. It plays a pivotal role in various industries and research domains, offering solutions to some of the most pressing challenges in AI and machine learning. By actively involving human experts in the loop, Active Learning not only reduces annotation costs and effort but also accelerates model development, making it particularly relevant for resource-constrained environments where only limited labeled data is available. This will be a hands-on, immersive experience with a solid foundation in the theoretical underpinnings, ensuring attendees grasp the core concepts. Participants will apply Active Learning techniques to real datasets, gaining practical experience in selecting informative data points, training models, and observing the impact on model performance. Furthermore, we will share best practices and pitfalls to avoid when implementing Active Learning in real applications. This workshop promises to equip attendees with the knowledge and skills they need to harness the full potential of Active Learning in their research and industry applications. By the end of the workshop, participants will be well prepared to incorporate this cutting-edge technique into their AI and machine learning endeavors, accelerating progress and achieving superior results.
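The "actively selecting the most informative examples" idea can be sketched as a simple uncertainty-sampling loop; the scikit-learn model, synthetic data, and query budget below are illustrative stand-ins for the workshop's real datasets.

```python
# Minimal active-learning sketch: start with a few labels, query the least-confident example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(10))                       # indices we "paid" to annotate
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for step in range(20):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    confidence = proba.max(axis=1)              # low max-probability = uncertain example
    query = pool[int(np.argmin(confidence))]    # ask the "expert" to label this one
    labeled.append(query)
    pool.remove(query)
    print(f"step {step:2d}: labeled={len(labeled)}, "
          f"accuracy on pool={model.score(X[pool], y[pool]):.3f}")
```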
Practical Tips for Deep Transfer Learning (Room D6)
Yauhen Babakhin, H2O
In this workshop we will start by introducing Transfer Learning and its benefits in Deep Learning. Then we will have an interactive session discussing and implementing best practices for tuning such models and improving their results. The workshop will be accompanied by an in-class Kaggle competition in image classification, which will have a ready-to-use code baseline in PyTorch. Participants will be able to directly apply the concepts we discuss to real data. Additionally, they will have the chance to introduce their own improvements to the baseline solution and share their findings afterwards.
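For context, the starting point of transfer learning often looks roughly like the sketch below (not the competition baseline itself): load an ImageNet-pretrained backbone from torchvision, freeze it, and train only a new classification head; the 10-class task and random batch are placeholders.

```python
# Minimal transfer-learning sketch in PyTorch/torchvision.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # new head, trainable by default

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one toy training step on a random batch, standing in for a real DataLoader
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
# A common next step is unfreezing the last residual block and finetuning it
# with a lower learning rate.
```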
Power of Physics-ML: A Hands-on Workshop with Open-Source Tools (Room D7)
Idrees Muhammad, Turing Artificial Intelligence
In this workshop, participants will learn how to combine the power of neural networks with the laws of physics to solve complex scientific and engineering problems using open-source tools. Physics-Informed Neural Networks (PINNs) have gained popularity in fields including fluid dynamics, material science, and structural engineering for their ability to incorporate physical principles into machine learning models. Attendees will learn how to leverage open-source tools to build and train PINNs, enabling them to model and solve complex physical systems efficiently.
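The core PINN trick is to penalize the residual of a governing equation instead of fitting labeled data. A minimal plain-PyTorch sketch (not the workshop's tooling) for the toy ODE du/dx = -u with u(0) = 1, whose exact solution is exp(-x):

```python
# Minimal PINN sketch: learn u(x) by minimizing the ODE residual via autograd.
import torch
from torch import nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True) * 2.0       # collocation points in [0, 2]
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()               # residual of du/dx + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # enforce u(0) = 1
    loss = physics_loss + boundary_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(net(torch.tensor([[1.0]])).item(), "vs exact", torch.exp(torch.tensor(-1.0)).item())
```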
Saturday,
Conference day 1
O2 Universum, Českomoravská 2345/17a, 190 00, Praha (and online)
Registration from 9:00
Welcome to ML Prague 2024
LLMs, Reasoning, and the Path to Intelligence
Murray Campbell, IBM T. J. Watson Research Center
Enhancing Semantic Search: A Case Study on Fine-tuning with Noisy Data
Barbora Rišová, Seznam.cz
Quantum-Inspired and Quantum Machine Learning
Alexander Del Toro Barba, Google
LUNCH & POSTER SESSION
Supercharging Recommendation Systems with Large Language Models
Amey Dharwadker, Meta
Practical LLM Fine-Tuning For Semantic Search
Roman Grebennikov, Delivery Hero SE
Perspective Taking in Large Language Models
Lucie Flek, University of Bonn
COFFEE BREAK
Application of Machine Learning in Metagenomics
Enes Deumić, Genome Institute of Singapore
Translating Mobile Network Signals to Roads with Transformers
Stefan Josef, Dataclair
Developing and Up-Scaling RNA-Based Vaccines and Therapeutics Production Using Big Data and Generative AI
Andreea Mihailescu, Johns Hopkins University
COFFEE BREAK
Building Responsible and Safe Generative AI Applications
Mehrnoosh Sameki, Microsoft
Protecting Privacy with AI During Testing of Automated Cars
Mateus Riva, Valeo
Networking & drinks
Sunday,
Conference day 2
O2 Universum, Českomoravská 2345/17a, 190 00, Praha (and online)
Doors open at 08:30
Deep Learning Discovery of New Exoplanets
Hamed Valizadegan, NASA
Small-Data Deep Learning and Its Applications to Diagnostic Aid and Virtual AI Imaging
Kenji Suzuki, Tokyo Institute of Technology
Instant Insight: The Rise of Real-Time Online ML
Madalina Ciortan, Kiwi.com
COFFEE BREAK
Churn Detection and Explanation for a Modern Taxi-hailing Company
Martin Plajner, Logio
A Modular and Adaptive Bayesian System for Spear Phishing Detection
Jan Brabec, Cisco
Building an Efficient Geometric Deep Learning Pipeline
Idrees Muhammad, Turing Artificial Intelligence
LUNCH & POSTER SESSION
Advanced RAG: Your Company's Ultimate AI Assistant
John Sinderwing, Entecon
Insights into Scam Detection: Training Large Language Models with Limited Datasets
Branislav Bošanský, Gen
Interpretation of HR Data by Large Language Models
Ludek Kopacek, Workday
COFFEE BREAK
PANEL DISCUSSION
Murray Campbell, IBM T. J. Watson Research Center
Hamed Valizadegan, NASA
Madalina Ciortan, Kiwi.com
Kenji Suzuki, Tokyo Institute of Technology
CLOSING REMARKS
Have a great time in Prague, the city that never sleeps
You can feel centuries of history on every corner of this unique capital. We'll invite you to get a taste of our best pivo (that’s beer in Czech) and then bring you back to the present day to party at one of the local clubs all night long!
Venue ML Prague 2024 will be hybrid, running in person and online!
The main conference as well as the workshops will be held at O2 Universum.
We will also livestream the talks for all those participants who prefer to attend the conference online. Our platform will allow interaction with speakers and other participants too. Workshops require intensive interaction and won't be streamed.
Conference building
O2 Universum
Českomoravská 2345/17a, 190 00, Praha 9
Workshops
O2 Universum
Českomoravská 2345/17a, 190 00, Praha 9
Now or never Registration
Early Bird (Sold Out)
- Conference days: € 270
- Only workshops: € 200
- Conference + workshops: € 440
Standard (Sold Out)
- Conference days: € 290
- Only workshops: € 230
- Conference + workshops: € 490
Late (Sold Out)
- Conference days: € 320
- Only workshops: € 260
- Conference + workshops: € 520
What You Get
- Practical and advanced level talks led by top experts.
- Networking and drinks with speakers and people from all around the world.
- Delicious food and snacks throughout the conference.
They’re among us We are in the ML revolution age
Machines can learn. Incredibly fast. Faster than you. They are getting smarter every single day, changing the world we live in, our businesses, and our lives. The artificial intelligence revolution is here. Come, learn, and turn this threat into your biggest advantage.
Our Attendees What they say about ML Prague
Thank you to Our Partners
Co-organizing Partner
Venue Partner
Platinum Partners
Gold Partners
Silver Partners
Communities and Further support
Would you like to present your brand to 1000+ Machine Learning enthusiasts? Send us an email at info@mlprague.com to find out how to become an ML Prague 2024 partner.
Happy to help Contact
If you have any questions about Machine Learning Prague, please e-mail us at
info@mlprague.com
Organizers
Jiří Materna
Scientific program & Co-Founder
jiri@mlprague.com
Teresa Pulda
Event production
teresa@mlprague.com
Gonzalo V. Fernández
Marketing and social media
gonzalo@mlprague.com
Jona Azizaj
Partnerships
jona@mlprague.com
Ivana Javná
Speaker support
ivana@mlprague.com
Barbora Toman Hanousková
Communication
barbora@mlprague.com