Welcome to the Huawei Zurich Tech Arena, where innovation meets challenges in the world of AI and network technology! 

This year, we’re excited to present two groundbreaking challenges designed to push the boundaries of model compression and network scalability. Compete with the best minds in tech as you explore solutions that could redefine the future of large-scale AI processing.

With a Docker environment ready to go, all the tools are at your fingertips. Join us at Huawei Zurich Tech Arena to show off your skills, compete for the top spot, and make your mark on the future of AI and network innovation!

WHY JOIN?

Collaborate

with industry leaders and top innovators from across Europe.

Showcase your expertise

in front of a leading tech company and its decision-makers.

Get hands-on experience 

solving challenging problems in the world of AI and network technology! 

Gain insights

into the latest technologies and trends shaping the future of AI. 

Compete

for a chance at a €28,000 prize pool and potential career opportunities within Huawei. 

Network

with like-minded individuals across multiple countries.

WHO CAN PARTICIPATE?

Students

Apply only if you are in a Master's or PhD program.



Students from Switzerland, the UK, and Germany are invited to register! Compete solo; no teams are allowed.

Studying...

  • Networking & Communication Systems
  • Computer Science
  • Computer Engineering
  • Software Engineering
  • Data Science
  • Electrical Engineering
  • Computer Systems
  • Information Technology
  • Systems Engineering
  • Machine Learning
  • Artificial Intelligence
  • Computational Mathematics

CHALLENGES

Challenge 2: Communication-Affined Direct Topology of NPUs

Participants will be asked to:

  • Define a general direct network topology framework that specifies the connections among switch nodes, such that the cluster achieves the maximum network scale, i.e., supports the maximum number of GPUs in the cluster and approximately approaches the Moore bound (a short numeric sketch of this bound follows the list below).
  • Model the communication efficiency of the network topology for the AllReduce and AlltoAll primitives, and devise AllReduce/AlltoAll algorithms that achieve the ideal/optimal performance.
  • Describe the modularity of the network topology (how it can be constructed in practice).
  • Compare the proposed network topology with classical topologies (e.g., Clos, Dragonfly, Dragonfly+) regarding their advantages, disadvantages, and application scope.
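For intuition, the Moore bound gives the theoretical maximum number of nodes a direct topology with a given switch degree and network diameter can contain, and so upper-bounds the achievable network scale. The short Python sketch below simply evaluates the standard formula; the 32-port, diameter-3 example is an assumption for illustration, not part of the challenge specification.

    # Moore bound: maximum number of nodes in a graph with node degree d and
    # diameter k. This is the yardstick the challenge refers to when it asks
    # that the topology "approximately approaches the Moore bound".
    def moore_bound(degree: int, diameter: int) -> int:
        """Upper bound on node count for a graph of given degree and diameter."""
        if degree == 2:
            return 2 * diameter + 1  # ring-like graphs
        # 1 + d * (1 + (d-1) + (d-1)^2 + ... + (d-1)^(k-1))
        return 1 + degree * (((degree - 1) ** diameter - 1) // (degree - 2))

    # Example: switches used as degree-32 routers with network diameter 3.
    print(moore_bound(32, 3))  # 31777 switch nodes at most

If each switch additionally exposes some number of ports to accelerators, the corresponding bound on accelerators is that port count times the node count above.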

Challenge 1: Compression for LLM Inference

The challenge consists of implementing one-shot compression of the Llama-3.1-B model, with the goal of achieving the highest compression rate at the lowest accuracy degradation with respect to the original model.


For one-shot compression, participants may apply pruning and weight-only quantization, but may not re-train or fine-tune the compressed model to improve its accuracy.
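As a point of reference, weight-only quantization in its simplest one-shot form is round-to-nearest (RTN) scaling applied directly to the stored weights. The PyTorch sketch below is an illustrative baseline under assumed settings (4-bit, per-group scales with group size 128); it is not the organizers' reference solution.

    # Minimal one-shot, weight-only round-to-nearest (RTN) quantization sketch.
    # Bit width, group size, and function names are assumptions for illustration.
    import torch

    def quantize_weight_rtn(weight: torch.Tensor, n_bits: int = 4, group_size: int = 128) -> torch.Tensor:
        """Per-group symmetric RTN quantization of a 2-D weight matrix (returns the dequantized copy)."""
        out_features, in_features = weight.shape
        w = weight.reshape(out_features, in_features // group_size, group_size)
        max_int = 2 ** (n_bits - 1) - 1
        # One scale per group so the largest magnitude maps onto the integer range.
        scales = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / max_int
        q = torch.clamp(torch.round(w / scales), -max_int - 1, max_int)
        # Dequantize immediately ("fake quantization") so the rest of the model runs unchanged.
        return (q * scales).reshape(out_features, in_features).to(weight.dtype)

    @torch.no_grad()
    def quantize_model_weights(model: torch.nn.Module, group_size: int = 128) -> torch.nn.Module:
        """Apply RTN quantization to every nn.Linear weight in place: one shot, no re-training."""
        for module in model.modules():
            if isinstance(module, torch.nn.Linear) and module.in_features % group_size == 0:
                module.weight.copy_(quantize_weight_rtn(module.weight, group_size=group_size))
        return model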


Participants are required to integrate their solution into the popular lm-evaluation-harness benchmarking framework. A Docker container containing the model development and evaluation environment is provided.
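For orientation, one way to drive a compressed Hugging Face model through lm-evaluation-harness is via its Python entry points. The sketch below assumes the v0.4-style API (the HFLM wrapper and simple_evaluate); the exact interface, task list, and model identifier in the provided container may differ.

    # Hedged sketch: evaluating a compressed model with lm-evaluation-harness.
    # HFLM/simple_evaluate follow the v0.4-style API; the model id, tasks, and
    # batch size are placeholders, not challenge requirements.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from lm_eval import simple_evaluate
    from lm_eval.models.huggingface import HFLM

    model_id = "<challenge-model-id>"  # placeholder: use the exact model specified by the challenge
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # One-shot compression step, e.g. the RTN sketch above (no re-training afterwards).
    model = quantize_model_weights(model)

    # Wrap the compressed model so the harness can drive it, then run a small task suite.
    lm = HFLM(pretrained=model, tokenizer=tokenizer, batch_size=8)
    results = simple_evaluate(model=lm, tasks=["hellaswag", "arc_easy"], num_fewshot=0)
    print(results["results"])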

PRIZE POOL

1st Place

€6,000

1 spot per challenge

2nd Place

€4,000

1 spot per challenge

3rd Place

€2,000

2 spots per challenge

AGENDA

1

Registrations Open: November 4th 

2

Challenge 1 released: November 18th 

3

Challenge 2 released: November 25th 

4

Registrations close: November 28th

5

Final Submissions: December 2nd 

6

Finalists announced: December 9th

7

Onsite Demo Day: December 16th

FINAL HACKATHON

ONSITE VENUE

Huawei Zurich Research Center, 

Thurgauerstrasse 80, 8050 Zürich, Switzerland

ANY QUESTIONS?

Don’t hesitate to get in touch at amandine@bemyapp.com

© 2024. BEMYAPP, All rights reserved.
