Amgen

Amgen - Data Engineer - R/Python

Job Location

Hyderabad, India

Job Description

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Role: Data Engineer

What you will do:

Let's do this. Let's change the world. We are seeking an experienced Data Engineer to work on a GxP platform supporting the Regulatory Submission system and Clinical Trial Registry system. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (see the sketch after this list)
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to standard methodologies for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Work with data engineers on data quality assessment, data cleansing, and data analytics
- Share and discuss findings with team members practicing the SAFe Agile delivery model
- Work as a Data Engineer on a team that uses Cloud and Big Data technologies to design, develop, implement, and maintain solutions supporting the R&D functional area
- Overall management of the Enterprise Data Lake on the AWS environment to ensure that service delivery is cost effective and business SLAs around uptime, performance, and capacity are met
- Proactively work on challenging data integration problems by implementing optimal ETL patterns and frameworks for structured and unstructured data
- Automate and optimize data pipelines and frameworks for an easier and more cost-effective development process
- Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to the design, build, and maintenance of scalable, efficient Data Lake and other Big Data solutions
- Experience developing in an Agile development environment; comfortable with Agile terminology and ceremonies
- Stay up to date with the latest data technologies and trends
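As a rough illustration only, here is a minimal PySpark sketch of the kind of batch ETL with a data-quality gate described above; the S3 paths, column names (trial_id, submitted_at), and the 5% rejection threshold are all hypothetical examples, not details taken from this posting.

```python
# Minimal PySpark ETL sketch: batch-load raw CSV records, apply basic
# data-quality checks, and write a curated Parquet dataset.
# All paths, column names, and thresholds here are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-quality-sketch").getOrCreate()

# Extract: read a raw batch drop (hypothetical S3 location).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/trials/")

# Transform: normalize types and drop records failing simple quality rules.
clean = (
    raw.withColumn("submitted_at", F.to_date("submitted_at", "yyyy-MM-dd"))
       .filter(F.col("trial_id").isNotNull())
       .dropDuplicates(["trial_id"])
)

# A simple data-quality gate: fail the run if too many rows were rejected.
total = raw.count()
rejected = total - clean.count()
if total > 0 and rejected / total > 0.05:
    raise ValueError(f"Data-quality gate failed: {rejected} rows rejected")

# Load: write the curated dataset for downstream consumers.
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/trials/")
```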
What we expect of you:

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:

- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience
- 3-5 years of experience in the pharmaceutical industry
- 3-5 years of experience in Mulesoft development
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and standard methodologies
- Knowledge of data protection and pharmaceutical regulations and compliance requirements (e.g., GxP, GDPR, CCPA)
- Demonstrated hands-on experience with the AWS cloud platform and its technologies, such as EC2, RDS, S3, Redshift, and IAM roles
- Extensive hands-on experience with data ingestion methods such as batch, API, and streaming (see the streaming sketch after the qualifications lists)
- Demonstrated experience performing data integrations using Mulesoft
- Solid understanding of ETL, data modeling, and data warehousing concepts
- Ability to work independently with little supervision
- Ability to effectively present information to collaborators and respond to their questions

Preferred Qualifications:

- Knowledge of clinical data in the pharmaceutical industry
- Knowledge of the CT.gov and EUCTR.gov portals
- Knowledge of the Disclose application from Citeline
- Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Proficiency with containerization and orchestration tools such as Kubernetes and Docker
- Familiarity with SQL/NoSQL databases and vector databases for Large Language Models
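As a rough illustration of streaming ingestion (one of the methods named in the qualifications above), here is a minimal Spark Structured Streaming sketch; the Kafka broker address, topic name (submission-events), and S3 paths are hypothetical, and the Kafka connector package (spark-sql-kafka) must be available on the Spark classpath for this to run.

```python
# Minimal Spark Structured Streaming ingestion sketch.
# Broker, topic, and paths are hypothetical; requires the
# spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-ingest-sketch").getOrCreate()

# Ingest a stream of events from a hypothetical Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "submission-events")
         .load()
         .select(F.col("value").cast("string").alias("payload"))
)

# Land the raw payloads continuously as Parquet for downstream batch ETL.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/landing/submissions/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/submissions/")
          .start()
)
query.awaitTermination()
```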
Professional Certifications:

- SAFe for Teams certification (preferred)
- Databricks Certification (preferred)
- AWS Certified Data Engineer (preferred)

Soft Skills:

- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

Shift Information:

This position requires you to work a later shift and may be assigned a second- or third-shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

Note: For your candidature to be considered for this job, you must also apply on the company's redirected page for this posting. (ref:hirist.tech)


Contact Information

Contact Human Resources
Amgen

Posted

April 11, 2025
UID: 5139864528
