Current AWS-related openings

Bioinformatics Analyst

Remote
Full-time

We are looking for a skilled and motivated Bioinformatician / Data Scientist to join a dynamic team. In this role, you will have the opportunity to work on a diverse range of projects, utilizing your expertise in bioinformatics and data science to tackle complex scientific challenges. As a key member of our team, you will contribute to the development and application of cutting-edge computational methods and algorithms, enabling our clients to gain valuable insights from their data.

You should be able to be present in person at the Cambridge, USA office at least once a week.


Responsibilities:

  • Collaborate closely with clients to understand their specific research goals and design tailored bioinformatics and data analysis solutions.
  • Collaborate with interdisciplinary teams of biologists, geneticists, and data scientists to develop and implement computational strategies for analyzing large-scale biological datasets.
  • Develop and implement computational pipelines and workflows for processing and analyzing diverse biological data types, including genomics, transcriptomics, proteomics, and metabolomics.
  • Participate in the development, deployment, and optimization of bioinformatics pipelines for processing NGS data (single-cell and bulk RNA-seq), and interpret their results to generate insights.
  • Apply statistical analysis, data mining, and machine learning techniques to identify patterns, correlations, biomarkers, and predictive models in large-scale datasets.
  • Stay up-to-date with the latest advancements in bioinformatics and contribute to the continuous improvement of existing methodologies and algorithms.
  • Present findings and results to internal teams and external stakeholders in a clear and concise manner.
  • Deploy and optimize bioinformatics workflows for the integration and analysis of NGS data, including short- and long-read sequencing data, and interpret their results to generate insights.
  • Perform quality control checks, align sample data to the reference genome, and produce variant call files (VCFs) and joint-genotyped VCFs (a minimal pipeline sketch follows this list).
  • Conduct statistical and genomic analyses and develop custom algorithms.
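
As a rough illustration of the variant-calling workflow described above (QC, alignment, per-sample calling, joint genotyping), here is a minimal Python sketch. The tool choices (FastQC, bwa, samtools, GATK), file layout, and sample names are assumptions for illustration only; in practice these steps would typically be expressed in a workflow framework such as Nextflow or Cromwell.

```python
# Minimal sketch of a QC -> alignment -> variant-calling -> joint-genotyping flow.
# Tools, paths, and sample names are illustrative assumptions, not a prescribed stack.
import subprocess
from pathlib import Path

REF = Path("ref/genome.fa")          # assumed reference genome (indexed for bwa/GATK)
SAMPLES = ["sample_A", "sample_B"]   # assumed sample identifiers


def run(cmd: str) -> None:
    """Run a shell command and fail fast on errors."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)


def process_sample(sample: str) -> Path:
    r1, r2 = f"fastq/{sample}_R1.fastq.gz", f"fastq/{sample}_R2.fastq.gz"
    bam = Path(f"bam/{sample}.sorted.bam")
    gvcf = Path(f"vcf/{sample}.g.vcf.gz")

    # 1. Quality control of raw reads
    run(f"fastqc {r1} {r2} -o qc/")
    # 2. Align to the reference and coordinate-sort
    run(f"bwa mem -t 8 {REF} {r1} {r2} | samtools sort -o {bam} -")
    run(f"samtools index {bam}")
    # 3. Per-sample variant calling to a gVCF
    run(f"gatk HaplotypeCaller -R {REF} -I {bam} -O {gvcf} -ERC GVCF")
    return gvcf


def joint_genotype(gvcfs: list[Path]) -> None:
    # 4. Combine per-sample gVCFs and joint-genotype the cohort
    variant_args = " ".join(f"-V {g}" for g in gvcfs)
    run(f"gatk CombineGVCFs -R {REF} {variant_args} -O vcf/cohort.g.vcf.gz")
    run(f"gatk GenotypeGVCFs -R {REF} -V vcf/cohort.g.vcf.gz -O vcf/cohort.joint.vcf.gz")


if __name__ == "__main__":
    for d in ("qc", "bam", "vcf"):
        Path(d).mkdir(exist_ok=True)
    joint_genotype([process_sample(s) for s in SAMPLES])
```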


What we expect:

  • B.S. or M.S. in a relevant field (Computer Science, Bioinformatics, etc.) with hands-on experience in NGS workflow development and analysis.
  • Experience in the GxP, Genedata Selector, and NGS for Cell Therapy domains is a must.
  • Solid understanding of bioinformatics concepts, algorithms, and tools.
  • Proven experience in analyzing high-throughput genomic, transcriptomic, or proteomic data.
  • Hands-on experience with creating single-cell and bulk RNA-seq data processing pipelines.
  • Proficiency in pipeline development using Nextflow, Cromwell, or another popular framework.
  • Proficiency in programming languages such as Python or R and experience with relevant bioinformatics software and tools.
  • Solid knowledge of statistical analysis, machine learning, and data mining techniques.
  • English level C1 or higher.


Nice to have:

  • Experience in next-generation sequencing (NGS) data analysis and variant calling.
  • Knowledge of structural bioinformatics and molecular modeling.
  • Familiarity with cloud computing platforms and big data analysis frameworks.
  • Experience with deploying pipelines to AWS.
  • Strong communication and interpersonal skills with the ability to effectively collaborate with cross-functional teams and communicate complex concepts to non-technical stakeholders.


Lead Data Engineer

Remote
Full-time

The project, a platform for creating and publishing content on social media using artificial intelligence tools, is looking for a Lead Data Engineer.


Responsibilities:

- Design, develop, and maintain robust and scalable data pipelines for collecting, processing, and storing data from diverse social media sources and user interactions.

- Design the data warehouse.

- Implement rigorous data quality checks and validation processes to uphold the integrity, accuracy, and reliability of social media data used by our AI models.

- Automate Extract, Transform, Load (ETL) processes to streamline data ingestion and transformation, reducing manual intervention and enhancing efficiency (a minimal ETL sketch follows this list).

- Continuously monitor and optimize data pipelines to improve speed, reliability, and scalability, ensuring seamless operation of our AI Assistant.

- Collaborate closely with Data Scientists, ML Engineers, and cross-functional teams to understand data requirements and provide the necessary data infrastructure for model development and training.

- Enforce data governance practices, guaranteeing data privacy, security, and compliance with relevant regulations, including GDPR, in the context of social media data.

- Establish performance benchmarks and implement monitoring solutions to identify and address bottlenecks or anomalies in the data pipeline.

- Collaborate with data analysts and business teams to design interactive dashboards that enable data-driven decision-making.

- Develop and support data marts and dashboards that provide real-time insights into social media data.

- Stay updated with emerging data technologies, tools, and frameworks, evaluating their potential to improve data engineering processes.
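
As a rough illustration of the ETL automation described above, here is a minimal Python sketch that extracts post records from a hypothetical social media API, applies basic quality checks, and loads the cleaned batch into a warehouse table. The endpoint, response shape, column names, and connection string are illustrative assumptions, not a specification of the project's stack.

```python
# Minimal ETL sketch: extract -> transform (quality checks) -> load.
# Endpoint, schema, and warehouse URL are illustrative assumptions.
import os

import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example.com/v1/posts"                     # hypothetical endpoint
DB_URL = os.environ.get("WAREHOUSE_URL", "sqlite:///warehouse.db")


def extract() -> list[dict]:
    """Pull a batch of raw post records from the source API."""
    resp = requests.get(API_URL, params={"limit": 1000}, timeout=30)
    resp.raise_for_status()
    return resp.json()["items"]                                  # assumed response shape


def transform(records: list[dict]) -> pd.DataFrame:
    """Basic quality checks: drop duplicates, enforce types, reject incomplete rows."""
    df = pd.DataFrame(records)
    df = df.drop_duplicates(subset="post_id")
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce", utc=True)
    df = df.dropna(subset=["post_id", "created_at"])             # integrity check
    return df[["post_id", "author_id", "created_at", "text", "likes"]]


def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch to the warehouse fact table."""
    engine = create_engine(DB_URL)
    df.to_sql("social_posts", engine, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract()))
```

In production such a job would typically run under an orchestrator with the monitoring and alerting mentioned above, rather than as a standalone script.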


Qualifications:

- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

- Proven experience in data engineering, focusing on ETL processes, data pipeline development, and data quality assurance.

- Strong proficiency in programming languages such as Python and SQL, and knowledge of data engineering libraries and frameworks.

- Experience with cloud-based data storage and processing solutions, such as AWS, Azure, or Google Cloud.

- Familiarity with DataOps principles and Agile methodologies.

- Excellent problem-solving skills and the ability to work collaboratively in a cross-functional team.

- Strong communication skills to convey technical concepts to non-technical stakeholders.

- Knowledge of data governance and data privacy regulations is a plus.

Java Software Engineer

Full-time
Permanent position

Project

We are looking for an experienced Java developer with PHP or Go experience to join an international fintech company specializing in trading, Forex, ETFs, cryptocurrencies, etc.

The specialist will be responsible for building and maintaining our software applications.


Responsibilities:

- Work as part of the development team and participate in all stages of the development lifecycle.

- Write well-designed, testable, efficient code and tests.

- Analyze existing components and propose necessary updates.

- Prepare and use technical documentation for changes.

- Stay up to date with industry best practices, trends, and developments.


Requirements:

  • 3+ years of experience in software development.
  • Excellent knowledge of Java SE.
  • Some experience with PHP or Go (both, or at least one of these languages).
  • Hands-on experience with Spring: Boot, MVC, Data, etc.
  • Experience developing high-load data processing systems.
  • Experience with SQL (preferably PostgreSQL) and ORM technologies (JPA, Hibernate).
  • Understanding of how an ESB works (preferably Kafka).


Nice to have:

  • Experience in financial, investment, or trading companies.
  • Good knowledge of data structures and architectural patterns.
  • Experience with NoSQL databases (Redis, MongoDB).
  • Experience with cloud-native environments (preferably AWS).
  • Experience with SOA and microservices.
  • Understanding of Agile methodologies.


Benefits:

  • Work in a dynamic, fast-growing international company.
  • Relocation package to Montenegro.
  • Cutting-edge technologies and modern business practices such as Agile.



AWS DevOps Engineer

Seniority: Senior 

We are looking for an AWS DevOps engineer with the following experience:


- AWS Services;

- Postgres and MongoDB (cloud databases);

- Setting up databases for UAT and Production;

- First-hand experience with Containers and AWS Services, including authentication and authorization;

- The engineer must be located in the US.


Expected start date - ASAP.