Current Computer Science Openings

Technical Support Associate

Office
Full-time

An international company with over 25 years of financial markets expertise is looking for a Technical Support Associate to join the team in Montenegro.


In this role, you should have excellent problem-solving, interpersonal, and communication skills. You're a strong match if you are a team player who collaborates well with others to solve problems, actively incorporates input from various sources, and has a demonstrated customer focus.


Requirements

  • Customer focus: politeness and loyalty in every interaction.
  • Ability to build strong customer relationships and to design processes from the customer's viewpoint.
  • Strong analytical skills.
  • Ability to communicate clearly.
  • Ability to evaluate information and data effectively to make decisions.
  • Ability to anticipate obstacles and develop plans to resolve them.
  • Broad understanding of IT Infrastructure and support processes.
  • Strong track record of understanding and interest in current and emerging technologies demonstrated through training, job experience and / or industry activities.
  • Change-oriented: actively generates process improvements.
  • Supports and drives change, confronting difficult circumstances in creative ways.
  • Professional education in IT or relevant experience.
  • Knowledge of or experience with the Windows family at both user and administrator level.
  • Basic knowledge of networking.
  • Professional-level knowledge of desktop and laptop PC hardware.
  • Basic knowledge of monitoring systems and strategies.
  • Basic knowledge of recovery systems and strategies.
  • Basic knowledge of ITSM and ITIL; knowledge of Atlassian products is a plus.
  • Work experience in programming is nice to have.
  • Familiarity with writing scripts in at least one of Python, JavaScript, or Bash (see the sketch after this list).
  • Familiarity with Dynatrace, Zabbix, Git, or Jira would be a benefit.
  • Bachelor’s degree in Information Systems, Information Technology, Computer Science or Engineering from an accredited university or college.
  • Working-level English, both oral and written (intermediate minimum).
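
As a rough illustration of the scripting level expected, here is a minimal Python sketch of a routine support check (the host name and disk threshold are placeholders, not part of the company's actual tooling):

    # Routine support check: disk usage and host reachability.
    # Placeholder values; adapt to the actual environment.
    import shutil
    import subprocess

    DISK_THRESHOLD = 0.9           # warn when a volume is more than 90% full
    HOST_TO_CHECK = "example.com"  # placeholder host, for illustration only

    def disk_usage_ratio(path="/"):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def host_is_reachable(host):
        # One ICMP echo request; "-c" works on Linux/macOS ("-n" on Windows).
        result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
        return result.returncode == 0

    if __name__ == "__main__":
        ratio = disk_usage_ratio()
        warning = " - WARNING" if ratio > DISK_THRESHOLD else ""
        print(f"Disk usage: {ratio:.0%}{warning}")
        print(f"{HOST_TO_CHECK} reachable: {host_is_reachable(HOST_TO_CHECK)}")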


Bioinformatics Analyst

Remote
Full-time

We are looking for a skilled and motivated Bioinformatician / Data Scientist to join a dynamic team. In this role, you will have the opportunity to work on a diverse range of projects, utilizing your expertise in bioinformatics and data science to tackle complex scientific challenges. As a key member of our team, you will contribute to the development and application of cutting-edge computational methods and algorithms, enabling our clients to gain valuable insights from their data.

You should be able to be present in person at the Cambridge, USA office at least once a week.


Responsibilities:

  • Collaborate closely with clients to understand their specific research goals and design tailored bioinformatics and data analysis solutions.
  • Collaborate with interdisciplinary teams of biologists, geneticists, and data scientists to develop and implement computational strategies for analyzing large-scale biological datasets.
  • Develop and implement computational pipelines and workflows for processing and analyzing diverse biological data types, including genomics, transcriptomics, proteomics, and metabolomics.
  • Participate in the development, deployment, and optimization of bioinformatics pipelines for processing NGS data (single-cell and bulk RNA-seq), and interpret their results to generate insights (a minimal illustration follows this list).
  • Perform statistical analysis and data mining to identify patterns, correlations, and biomarkers.
  • Apply statistical modeling and machine learning techniques to identify patterns, correlations, and predictive models from large-scale datasets.
  • Stay up-to-date with the latest advancements in bioinformatics and contribute to the continuous improvement of existing methodologies and algorithms.
  • Present findings and results to internal teams and external stakeholders in a clear and concise manner.
  • Deploy and optimize bioinformatic workflows for the integration and analysis of NGS data, including short- and long-read sequencing data. Interpret results from these workflows to generate insights.
  • Perform quality control checks, align sample data to the reference genome, and produce variant call files (VCFs), including joint-genotyped VCFs.
  • Conduct statistical and genomic analysis, develop custom algorithms.
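
As a rough illustration of the single-cell RNA-seq processing mentioned above, here is a minimal QC and normalization sketch in Python using scanpy (the input path is a placeholder; production pipelines would typically wrap such steps in a workflow framework like Nextflow):

    # Minimal single-cell RNA-seq QC/normalization sketch using scanpy.
    import scanpy as sc

    # Placeholder path to a 10x Genomics filtered matrix directory.
    adata = sc.read_10x_mtx("data/filtered_feature_bc_matrix/")

    # Basic quality control: drop near-empty cells and rarely observed genes.
    sc.pp.filter_cells(adata, min_genes=200)
    sc.pp.filter_genes(adata, min_cells=3)

    # Library-size normalization followed by a log transform.
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)

    print(adata)  # summary of the cells-by-genes matrix after QC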


What we expect:

  • B.S. or M.S. in a relevant field (Computer Science, Bioinformatics, etc.) with hands-on experience in NGS workflow development and analysis.
  • Experience with GxP, Genedata Selector and NGS for Cell Therapy domains is a must.
  • Solid understanding of bioinformatics concepts, algorithms, and tools.
  • Proven experience in analyzing high-throughput genomic, transcriptomic, or proteomic data.
  • Hands-on experience with creating single-cell and bulk RNA-seq data processing pipelines.
  • Proficiency in pipeline development using Nextflow, Cromwell, or another popular framework.
  • Proficiency in programming languages such as Python or R, and experience with relevant bioinformatics software and tools.
  • Solid knowledge of statistical analysis, machine learning, and data mining techniques (a toy example follows this list).
  • English level C1 or higher.
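
As a toy illustration of the statistical skills listed above, the sketch below runs a two-group per-gene test on synthetic data (real analyses would add multiple-testing correction and effect-size estimates):

    # Toy two-group differential test per gene on synthetic data.
    # Real analyses would add multiple-testing correction
    # (e.g. Benjamini-Hochberg) and effect-size estimates.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_genes = 1000
    group_a = rng.normal(0.0, 1.0, size=(20, n_genes))  # 20 control samples
    group_b = rng.normal(0.0, 1.0, size=(20, n_genes))  # 20 treated samples
    group_b[:, :50] += 1.5  # plant a signal in the first 50 genes

    t_stat, p_values = stats.ttest_ind(group_a, group_b, axis=0)
    print("genes with p < 0.001:", int((p_values < 0.001).sum()))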


Nice to have:

  • Experience in next-generation sequencing (NGS) data analysis and variant calling.
  • Knowledge of structural bioinformatics and molecular modeling.
  • Familiarity with cloud computing platforms and big data analysis frameworks.
  • Experience with deploying pipelines to AWS.
  • Strong communication and interpersonal skills with the ability to effectively collaborate with cross-functional teams and communicate complex concepts to non-technical stakeholders.


Lead Data Engineer

Remote
Full-time

The project, a platform for creating and publishing content on social media using artificial intelligence tools, is looking for a Lead Data Engineer.


Responsibilities:

- Design, develop, and maintain robust and scalable data pipelines for collecting, processing, and storing data from diverse social media sources and user interactions.

- Design the data warehouse.

- Implement rigorous data quality checks and validation processes to uphold the integrity, accuracy, and reliability of social media data used by our AI models.

- Automate Extract, Transform, Load (ETL) processes to streamline data ingestion and transformation, reducing manual intervention and enhancing efficiency (a minimal sketch follows this list).

- Continuously monitor and optimize data pipelines to improve speed, reliability, and scalability, ensuring seamless operation of our AI Assistant.

- Collaborate closely with Data Scientists, ML Engineers, and cross-functional teams to understand data requirements and provide the necessary data infrastructure for model development and training.

- Enforce data governance practices, guaranteeing data privacy, security, and compliance with relevant regulations, including GDPR, in the context of social media data.

- Establish performance benchmarks and implement monitoring solutions to identify and address bottlenecks or anomalies in the data pipeline.

- Collaborate with data analysts and business teams to design interactive dashboards that enable data-driven decision-making.

- Develop and support data marts and dashboards that provide real-time insights into social media data.

- Stay updated with emerging data technologies, tools, and frameworks, evaluating their potential to improve data engineering processes.
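
As a minimal illustration of the ETL work described above, the Python sketch below extracts records from a hypothetical JSON export, transforms them with pandas, and loads them into SQLite (a real pipeline would target a production warehouse and a scheduler such as Airflow):

    # Minimal ETL sketch: extract from a placeholder JSON export, transform
    # with pandas, load into SQLite.
    import json
    import sqlite3

    import pandas as pd

    def extract(path):
        # Expects a JSON array of post records (placeholder schema).
        with open(path) as f:
            return json.load(f)

    def transform(records):
        # "post_id" and "created_at" are assumed field names.
        df = pd.DataFrame(records)
        df["created_at"] = pd.to_datetime(df["created_at"])
        return df.dropna(subset=["post_id"]).drop_duplicates("post_id")

    def load(df, db_path="social.db"):
        with sqlite3.connect(db_path) as conn:
            df.to_sql("posts", conn, if_exists="append", index=False)

    if __name__ == "__main__":
        load(transform(extract("posts_export.json")))  # placeholder file name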


Qualifications:

- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

- Proven experience in data engineering, focusing on ETL processes, data pipeline development, and data quality assurance.

- Strong proficiency in programming languages such as Python and SQL, and knowledge of data engineering libraries and frameworks.

- Experience with cloud-based data storage and processing solutions, such as AWS, Azure, or Google Cloud.

- Familiarity with DataOps principles and Agile methodologies.

- Excellent problem-solving skills and the ability to work collaboratively in a cross-functional team.

- Strong communication skills to convey technical concepts to non-technical stakeholders.

- Knowledge of data governance and data privacy regulations is a plus.


Senior Azure DevOps Engineer

Remote
Full-time

A project in the healthcare industry.


Responsibilities

  • Build automation processes for our Big Data platforms.
  • Install, deploy, configure, maintain and monitor infrastructure, systems, and management tools of our Big Data platforms.
  • Ensure the highest levels of systems and infrastructure availability for our Big Data platforms.
  • Ensure the proper security standards in terms of infrastructure, systems, and processes.
  • Monitor and test application performance for potential bottlenecks, identify possible solutions, and work with developers to implement fixes.
  • Provide second-level support.
  • Liaise with vendors (Azure and Databricks) and other IT personnel for problem resolution.


General Requirements

  • University degree or similar education in Software Engineering or Computer Science.
  • At least 3 years of experience in DevOps and/or in using automation to build infrastructure.
  • Fluent English and good communication skills, both verbal and written.
  • Ability to work in a globally distributed environment.


Technical Skills

  • Understanding of infrastructure automation concepts and good hands-on knowledge of Ansible.
  • Understanding of containerization concepts and strong knowledge of Docker.
  • Good understanding of Versioning Control systems and strong knowledge of Git.
  • Hands-on experience in Linux administration and in Bash scripting.
  • Experience with Jenkins.
  • Proficiency in writing scripts in Python (see the sketch after this list).
  • Understanding of cloud paradigms (IaaS/SaaS/PaaS).
  • Understanding of networking concepts and cybersecurity best practices.
  • Understanding of agile software development process.
  • Accustomed to leveraging tools like Confluence and Jira for knowledge and collaboration management.
  • Hands-on experience with MS Azure.
  • Experience with container orchestration and knowledge of Azure Kubernetes Service (AKS).
  • Familiarity with Big Data technologies: Spark/Hadoop, Azure Databricks, Apache Airflow.
  • Understanding of SQL and NoSQL database types, paradigms, and design patterns.
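
As a minimal illustration of Python-based Azure automation, the sketch below inventories resource groups with the Azure SDK for Python (the subscription ID is a placeholder; it assumes the azure-identity and azure-mgmt-resource packages and an authenticated session, e.g. via az login):

    # Minimal Azure automation sketch: inventory resource groups.
    # Requires the azure-identity and azure-mgmt-resource packages and an
    # authenticated environment (e.g. after `az login`).
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

    # Listing resource groups is a typical first step in an inventory script.
    for group in client.resource_groups.list():
        print(group.name, group.location)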