PepsiCo, Inc.
September 3, 2021
Plano, Texas


Job Description: As part of PepsiCo’s digital transformation agenda, we are seeking an HLA Tech Lead to join our expanding SRE team.

As an HLA Tech Lead you will be responsible for the design and development of the log analytics stack of tools that monitor and manage critical applications and infrastructure. You will design and configure ETL data pipelines using the log analytics common schema to onboard application and infrastructure logs and metrics, and configure index templates and index lifecycle management (ILM) for data retention. The HLA Tech Lead will build and define rules to correlate logs and metrics, identify anomalous behavior using AIOps, and provide deeper insights for faster recovery.
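As an illustration of the kind of ILM configuration this role covers, a retention policy for application logs can be sketched as a plain document. The policy name, rollover thresholds, and 30-day retention window below are illustrative assumptions, not PepsiCo standards:

```python
# Sketch of an Elasticsearch ILM policy body for application logs.
# In a real cluster this document would be applied with
# PUT _ilm/policy/<policy-name>; all thresholds here are illustrative.
import json

app_logs_ilm_policy = {
    "policy": {
        "phases": {
            # Hot phase: actively written indices, rolled over by size or age.
            "hot": {
                "actions": {
                    "rollover": {"max_primary_shard_size": "50gb", "max_age": "1d"}
                }
            },
            # Warm phase: older, read-mostly data shrunk to fewer shards.
            "warm": {
                "min_age": "7d",
                "actions": {"shrink": {"number_of_shards": 1}},
            },
            # Delete phase: enforce the retention window.
            "delete": {
                "min_age": "30d",
                "actions": {"delete": {}},
            },
        }
    }
}

print(json.dumps(app_logs_ilm_policy, indent=2))
```

Index templates then reference a policy like this so that every new log index is created with the retention rules already attached.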

This role will build and maintain solutions that provide insight into the infrastructure and services supporting applications, with a focus on the logs, metrics, and application traces that improve observability, including automation of data collection from common data sources.


  • Responsible for the definition and delivery of activities on the log monitoring roadmap
  • Collaborate with operations and engineering teams, application developers, management, and infrastructure teams to assess near- and long-term log monitoring needs
  • Implement, maintain, and consult on observability and monitoring using a log framework that supports the needs of multiple internal stakeholders
  • Develop Log Analytics Center of Excellence, design & implement best practices. Manage, lead, retain, and grow talent. Mentor and coach junior team members
  • Effectively communicate tool capabilities and processes to varying stakeholders
  • Assist with driving monitoring and logs standards to improve the consumer experience of mission-critical applications, services, and business processes with a strong focus on the end-to-end journey
  • Create and maintain optimal data model and data pipeline for enterprise observability solution
  • Architect and design a common data model, standard metadata taxonomy, data pipeline and curation of data for complex enterprise observability solutions covering infrastructure, system, and security logs and metrics
  • Architect and implement measurement criteria in the data pipeline for completeness, timeliness, and accuracy of data
  • Build processes and policies supporting data transformation, data structures, metadata, dependency, and data dictionary across the pipeline
  • Utilize PepsiCo data analytics tools like Grafana and become the data analytics SME to harness available technical asset inventory data to gain insights, improve data quality and increase self-service
  • Design, implement, and configure Kibana visualizations
  • Consulting experience in application support and log monitoring
  • Solve the most challenging data problems and lead the data analytics and log onboarding requirements
  • Strong data analysis skills; ability to independently write scripts/code to parse and analyze complex data
  • Experience with ServiceNow ITOM Event Management module integration
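The "independently write scripts/code to parse and analyze complex data" skill above can be sketched minimally. The line format (timestamp, service, severity, message) and field names here are assumptions chosen for illustration:

```python
# Minimal sketch: parse syslog-like lines and summarize error counts per service.
# The line format (timestamp service LEVEL message) is an assumed example format.
import re
from collections import Counter

LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<service>\S+)\s+(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.*)$"
)

def error_counts(lines):
    """Count ERROR lines per service, skipping lines that do not match."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2021-09-03T10:00:00Z checkout ERROR payment gateway timeout",
    "2021-09-03T10:00:01Z checkout INFO retry scheduled",
    "2021-09-03T10:00:02Z search ERROR shard unavailable",
    "2021-09-03T10:00:03Z checkout ERROR payment gateway timeout",
]
print(error_counts(sample))  # Counter({'checkout': 2, 'search': 1})
```

In the onboarding work described above, ad-hoc parsing like this is typically what validates a new log source before a permanent pipeline is built for it.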
Qualifications / Requirements:
  • Bachelor’s degree or higher in computer science, engineering or related field required; master’s degree preferred
  • 10+ years of experience designing and delivering monitoring solutions at scale using an enterprise logging framework across multi-functional SRE areas
  • Strong experience with Elastic clusters, configuration parameters, indexing, search and query performance tuning, security and administration
  • Strong experience with the full ELK Stack - Elasticsearch, Logstash, Kibana, Beat agents, Machine Learning
  • Experience building and optimizing large data volumes in data pipelines and architecture and data sets using Kafka, ELK and Splunk
  • Experience with Elasticsearch development, integration or support
  • Configuration and monitoring experience with Grafana
  • Develop Elastic alerting solutions using Watcher and Kibana or Grafana, with alerts integrated into ServiceNow for ticketing
  • Develop Machine Learning (ML) jobs to dynamically monitor and alert on specific metrics and KPIs; follow lifecycle processes to move solutions from Dev to QA to Production
  • ServiceNow ITOM Experience is a plus
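The Watcher-based alerting in the list above is defined as a JSON watch document, sketched here as a Python dict for illustration. The index pattern, schedule, threshold, and webhook URL are placeholder assumptions; a real deployment would point the webhook action at an actual ServiceNow endpoint for ticketing:

```python
# Sketch of an Elasticsearch Watcher watch: fire when the number of error-level
# log documents in the last 5 minutes exceeds a threshold. Index name, schedule,
# field names, and threshold are illustrative assumptions.
import json

error_spike_watch = {
    "trigger": {"schedule": {"interval": "5m"}},
    "input": {
        "search": {
            "request": {
                "indices": ["app-logs-*"],
                "body": {
                    "query": {
                        "bool": {
                            "filter": [
                                {"term": {"log.level": "error"}},
                                {"range": {"@timestamp": {"gte": "now-5m"}}},
                            ]
                        }
                    }
                },
            }
        }
    },
    # Fire when more than 100 error documents matched the search.
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 100}}},
    "actions": {
        # Placeholder webhook: a real watch would POST to a ServiceNow
        # endpoint to open an incident for ticketing.
        "open_ticket": {
            "webhook": {
                "method": "POST",
                "url": "https://example.service-now.com/api/now/table/incident",
                "body": json.dumps({"short_description": "Error spike in app logs"}),
            }
        }
    },
}

print(json.dumps(error_spike_watch)[:60])
```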

Broad technical knowledge in one or more of the following:

  • OS and Platform – Azure, PCF, Kubernetes, Linux, Windows, VMware, AWS, Cisco, Infoblox, F5, Palo Alto
  • AIOps: Moogsoft, BigPanda, UiPath, ServiceNow Integration Hub, Loom Systems, Robotic Process Automation (RPA), Artificial Intelligence (AI) and Machine Learning (ML) frameworks
  • Automation/self-heal capability enablement, observability, and AIOps: Loom Systems, Splunk, Datadog, Grafana, Prometheus, CloudWatch, Jaeger, Zipkin, Kinesis, Apache Airflow, SAP Focused Run, TWS, SAP Solman, Nimsoft, AppDynamics
  • API development using third-party libraries and REST APIs
  • Development expertise in Java, Python, PowerShell and shell scripting
  • Experience with Agile/Scrum methodology

Leadership Skills:

  • Servant leader, comfortable interacting in a consistently collaborative manner with all levels of the organization
  • Practitioner, empowering teams for success, with belief in sharing successes and failures, on a global basis
  • Ability to speak both IT and business, with a focus on representing the art of the possible
  • Experience in working with multiple Managed Service Providers
  • Excellent written and verbal communication with strong interpersonal skills


