SCIENCE to SOFTWARE

with real business results

 

The brightest minds in Natural Language Processing and Deep Learning work together with our Business Experts to build AI technology that benefits you! We make real AI services for the real world.

 

Pushing the boundaries of what is possible with AI. 

 

RESEARCH

Thanks to our connections to universities and research facilities, we have access to the newest research and concepts, which are at the core of raffle technology.

 

DATA SCIENTISTS

Our team of AI experts translates the research and science into software, bridging the gap between theory and reality.

DYNAMIC INFRASTRUCTURE

Our dynamic infrastructure, built on microservices, enables fast and reliable deployment of the newest AI and ensures scalability into the future.

Many talk about AI.
We are doing it.

Until recently, computational processing of language in a way that maintains context was nearly impossible.
Thanks to remarkable breakthroughs in the past year, the raffle.ai team is using open-source models published by big tech
companies, and combining them with unique in-house skills, to develop a search tool that understands queries
in natural language and delivers highly accurate answers.
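
To give a flavour of the idea, here is a minimal Python sketch of natural-language search built on an open-source encoder. It is an illustration only, not raffle.ai's implementation: the model name and the example passages are assumptions.

```python
# Illustrative sketch: semantic search with an open-source sentence encoder.
# The model name and passages are placeholder assumptions, not raffle.ai's setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source encoder

passages = [
    "You can reset your password from the account settings page.",
    "Invoices are sent by email on the first day of each month.",
    "Our support team is available on weekdays from 9 to 17.",
]
passage_embeddings = model.encode(passages, convert_to_tensor=True)

query = "How do I get a new password?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank passages by cosine similarity to the query and return the best match.
scores = util.cos_sim(query_embedding, passage_embeddings)[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```

Because both the query and the passages are mapped into the same embedding space, the match is based on meaning rather than exact keywords.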

 

Our development pipeline begins with research and product ideation. Led by Professor Ole Winther from DTU Compute, our team of researchers conducts research and leverages existing open-source resources to develop the mathematical models and theory behind advanced language-processing tasks.

This research is operationalised and turned into reality by Machine Learning experts and data scientists, who test the models on real data and re-train them for the datasets they will be used on.
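
As a rough sketch of what that re-training step can look like, the snippet below fine-tunes an open-source model on a labelled dataset with the Hugging Face Trainer. The model and dataset names are placeholders for illustration; in practice the data would be the customer's own content.

```python
# Illustrative sketch: adapting a pretrained model to new labelled data.
# Model and dataset names are placeholders, not raffle.ai's production choices.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder dataset; a real run would use the customer's domain data.
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8, logging_steps=10)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```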

The DevOps team ensures everything runs smoothly by building a dynamic infrastructure based on microservices running on Azure and Google Cloud. Leveraging a CI/CD pipeline, we continuously deploy and improve our models, providing our customers with the best-performing product possible.
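
The sketch below shows what one such microservice could look like: a small Python web service that wraps a model behind an HTTP endpoint, of the kind a CI/CD pipeline could package into a container and deploy to Azure or Google Cloud. The framework choice (FastAPI), the endpoint name, and the passages are illustrative assumptions.

```python
# Illustrative sketch of a model-serving microservice, not raffle.ai's code.
# Run locally with: uvicorn service:app --reload
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer, util

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical knowledge base; in production this would be the customer's content.
PASSAGES = [
    "You can reset your password from the account settings page.",
    "Our support team is available on weekdays from 9 to 17.",
]
PASSAGE_EMBEDDINGS = model.encode(PASSAGES, convert_to_tensor=True)

class Query(BaseModel):
    text: str

@app.post("/search")
def search(query: Query):
    # Embed the incoming query and return the best-matching passage.
    query_embedding = model.encode(query.text, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, PASSAGE_EMBEDDINGS)[0]
    best = int(scores.argmax())
    return {"answer": PASSAGES[best], "score": float(scores[best])}
```

Packaging each model behind its own small service like this is what makes it possible to redeploy and improve individual components continuously without touching the rest of the system.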

 

read more about our thinking