AI Research Engineer (Model Evaluation)

Join Tether and Shape the Future of Digital Finance

At Tether, we're pioneering a global financial revolution with innovative solutions that enable seamless integration of reserve-backed tokens across blockchains. Our offerings include the trusted stablecoin USDT, energy solutions for Bitcoin mining, data sharing apps like KEET, and educational initiatives to democratize digital learning.

We are a global, remote team looking for passionate individuals to contribute to our cutting-edge projects. If you have excellent English communication skills and a drive to innovate in fintech, Tether is the place for you.

About the job:

As part of our AI model team, you will develop evaluation frameworks for the pre-training, post-training, and inference stages. Your focus will be on designing metrics and assessment strategies to ensure models are responsive, efficient, and reliable across a range of applications, from resource-limited devices to multi-modal architectures.

You should have expertise in advanced model architectures, evaluation practices, and benchmarking. Your work will involve developing, testing, and implementing novel evaluation strategies to track performance indicators such as accuracy, latency, throughput, and memory footprint.
Collaboration with cross-functional teams to share findings and improve deployment strategies will be key.

Responsibilities:
- Develop and deploy evaluation frameworks assessing models at all stages, tracking KPIs like accuracy, latency, and memory usage.
- Create high-quality datasets and benchmarks to reliably measure model robustness and improvements.
- Collaborate with product, engineering, and operations teams to align evaluation metrics with business goals, presenting insights via dashboards and reports.
- Analyze evaluation data to identify bottlenecks and propose optimizations for model performance and resource utilization.
- Conduct research to refine evaluation methodologies, staying updated on emerging techniques to enhance benchmarking and model reliability.

Minimum requirements:
- A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or similar is preferred, with a strong publication record in top conferences.
- Experience in designing and evaluating AI models across different stages, with proficiency in developing assessment frameworks.
- Strong programming skills and experience building scalable evaluation pipelines; familiarity with performance metrics like latency, throughput, and memory footprint.
- Ability to conduct iterative experiments and research to improve evaluation practices.
- Experience collaborating with diverse teams and translating technical insights into actionable …