Hello all! I'm Joseph Ziegler, a Lead Site Reliability Engineer at Sporttrade, based in Camden, New Jersey. I work fully remotely from Roseburg, Oregon.
I'm an experienced IT professional with over ten years in the technical industry, with experience spanning support, Linux administration, server automation, high availability, CI/CD, configuration management, TCP/IP networking, and architecting and deploying large cross-region server environments for 30,000+ employees. I have supported large corporate infrastructure in very fast-paced environments, handling a large user base under pressure, including campus-wide outages, while working on several projects at once across the SRE and DevOps stack. I have a proven ability to drive projects independently and make a positive impact on a company's infrastructure.
I am very interested in new technology, TCP/IP networking, system administration, automation, and Infrastructure as Code (IaC). I am eager to learn new things and to make a difference with today's computer systems. My ambition is to use my diverse background and technical experience to make a positive impact on my career and on the world.
Technology is my passion. Working in the technical industry is my dream, and I do whatever I can to learn every single day and make a meaningful impact on the infrastructure I support.
• Migrated core SRE-managed applications and services to a containerized environment in preparation for a large cloud-native initiative.
• Converted standalone and containerized services to Kubernetes (EKS/GKE), reducing infrastructure costs by 25%.
• Implemented a centralized storage solution (Thanos) for Prometheus hosted in Kubernetes.
• Migrated services to use Terraform (IaC), reducing server/managed-service deployment times from hours to seconds.
• Owner of Bluecat DNS services within Uber’s corporate infrastructure.
• Migrated Uber's entire corporate DNS infrastructure from VMware to Google Cloud (a validation sketch follows this list).
• Designed and implemented critical service migrations from AWS to Google Cloud, ensuring little to no downtime.
• Redesigned CI/CD pipelines for teams using GitHub workflows, ensuring proper code testing, analysis, and seamless deployment to infrastructure.
• Wrote Python and Bash scripts to automate team tooling.
• Performed all duties of the Systems Engineer I and II role.
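For a flavor of the DNS migration work above, here is a minimal sketch of the kind of parity check one might run while cutting records over to Google Cloud: it compares the A records returned by a legacy resolver and its replacement. The resolver addresses and hostnames are hypothetical placeholders rather than Uber's actual infrastructure, and the script shells out to `dig`.

```python
#!/usr/bin/env python3
"""Minimal sketch: verify DNS parity between an old and a new resolver.

Hypothetical example only -- resolver IPs and hostnames are placeholders,
and the real migration tooling is not shown here. Requires `dig`.
"""
import subprocess

# Placeholder addresses for the legacy resolver and its cloud replacement.
LEGACY_RESOLVER = "10.0.0.53"
NEW_RESOLVER = "10.128.0.53"

# A handful of hostnames to spot-check (placeholders).
HOSTNAMES = ["puppet.corp.example.com", "ldap.corp.example.com"]


def lookup(resolver: str, name: str) -> set[str]:
    """Return the set of A records `resolver` answers for `name`."""
    out = subprocess.run(
        ["dig", "+short", f"@{resolver}", name, "A"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}


def main() -> None:
    for name in HOSTNAMES:
        old, new = lookup(LEGACY_RESOLVER, name), lookup(NEW_RESOLVER, name)
        status = "OK" if old == new else "MISMATCH"
        print(f"{status}: {name} old={sorted(old)} new={sorted(new)}")


if __name__ == "__main__":
    main()
```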
• Promoted to Senior Applications Developer.
• Owner of Corp Linux server authentication.
• Owner of Puppet / configuration management in Uber's Corp infrastructure.
• Owner of Uber's Corp Puppet Enterprise configuration management solution, fine-tuning and scaling Puppet infrastructure consisting of a Puppet master and multiple compile masters (see the sketch after this list).
• Architected and implemented distributed enterprise applications serving 4,000+ engineers.
• Member of the Change Advisory Board (CAB) representing the SRE team.
• Deployed high-impact, robust, and highly available services consisting of multi-region active-active clusters to ensure the lowest possible latency for end users.
• Researched and implemented new services in our environment, eliminating time-consuming Puppet code modifications.
• Mentored employees on other teams to support their career growth and deepen their understanding of the managed corporate services.
• Rewrote Puppet classes to scale with the growth of the server environment and to eliminate any potential manual configuration.
• Performed all duties of Systems Engineer I.
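As an illustration of keeping a scaled Puppet Enterprise deployment healthy, below is a minimal sketch of a fleet-health query against PuppetDB's v4 API, listing nodes whose latest run failed. The PuppetDB URL is a placeholder and Puppet Enterprise's certificate-based authentication is omitted for brevity; this is not the actual tooling that was run.

```python
#!/usr/bin/env python3
"""Minimal sketch: list nodes whose most recent Puppet run failed.

Hypothetical example -- the PuppetDB URL is a placeholder and authentication
(normally client certificates in Puppet Enterprise) is omitted for brevity.
"""
import json

import requests

PUPPETDB = "http://puppetdb.corp.example.com:8080"  # placeholder


def failed_nodes() -> list[str]:
    """Query PuppetDB's v4 nodes endpoint for nodes with a failed last report."""
    query = json.dumps(["=", "latest_report_status", "failed"])
    resp = requests.get(
        f"{PUPPETDB}/pdb/query/v4/nodes", params={"query": query}, timeout=10
    )
    resp.raise_for_status()
    return [node["certname"] for node in resp.json()]


if __name__ == "__main__":
    for certname in failed_nodes():
        print(certname)
```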
Infrastructure Engineer I on the Tech Services, Corp Site Reliability Engineering Team, whose mission is to ensure a durable approach to engineering for all corporate systems through proactive design, automation, and metrics.
• Promoted to Systems Engineer II.
• Solved problems relating to mission-critical services and built automation to prevent problem recurrence.
• Configured and deployed applications in OneLogin.
• Deployed new applications created by software engineers into our corporate environment by configuring dedicated systems with Puppet Enterprise.
• Configured new and existing HAProxy servers for load balancing and high availability for web nodes.
• Determined ways to improve our current Puppet code for future infrastructure growth and rewrote it as necessary.
• Wrote custom Sensu checks and metrics for monitoring and alerting purposes (see the sketch after this list).
• Administered internal systems including Puppet Enterprise, RabbitMQ, Sensu, Elasticsearch, Logstash, Kibana (ELK stack), Grafana/Graphite, Redis, Kafka, Stash/Bitbucket, Bamboo, OneLogin, Jira, and Confluence (Atlassian Suite).
• Managed 1000+ nodes spread across Uber's global corporate infrastructure using Puppet, including Ubuntu, Debian, CentOS, and Windows operating systems.
• Performed rapid, ad hoc system management tasks across the fleet using Fabric.
• Led and coordinated the implementation of new services into the Corp infrastructure.
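As an example of the custom Sensu checks mentioned above, here is a minimal sketch of a disk-usage check following the Nagios-style exit-code convention Sensu consumes (0 = OK, 1 = WARNING, 2 = CRITICAL). The path and thresholds are placeholders rather than the checks actually deployed.

```python
#!/usr/bin/env python3
"""Minimal sketch of a custom Sensu check (Nagios-style exit codes).

Illustrative only: the monitored path and thresholds are placeholders.
Sensu treats exit 0 as OK, 1 as WARNING, and 2 as CRITICAL, and uses the
printed line as the check output.
"""
import shutil
import sys

PATH = "/"           # filesystem to check (placeholder)
WARN, CRIT = 80, 90  # percent-used thresholds (placeholders)


def main() -> int:
    usage = shutil.disk_usage(PATH)
    pct_used = usage.used / usage.total * 100
    if pct_used >= CRIT:
        print(f"CheckDisk CRITICAL: {PATH} is {pct_used:.1f}% full")
        return 2
    if pct_used >= WARN:
        print(f"CheckDisk WARNING: {PATH} is {pct_used:.1f}% full")
        return 1
    print(f"CheckDisk OK: {PATH} is {pct_used:.1f}% full")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```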
Provided onsite, phone, email, and chat support for end user systems and applications.
• Promoted to Infrastructure Engineer I on the Corp Site Reliability Engineering Team.
• Wrote weekly updates for the entire global service desk team, covering all important changes, updates, and critical information regarding the services we supported.
• Set the all-time record, alongside a colleague, for the most tickets completed in one workday.
• Started an after-work program to teach the service desk team technologies they wanted to learn for career growth.
• Served as the weekly IT onboarding lead for all of Uber's new full-time employees.
• Created and designed projects affecting all service desk teams globally, streamlining ticket resolution and helping the team scale with company growth.
• Wrote Bash scripts to automate tedious LDAP addition, deletion, and modification processes (see the sketch after this list).
• Wrote tools using Google Apps Script to make processes more efficient.
• Won the 2015 Most Valuable Person award on the global Tier 2 team.
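The LDAP automation mentioned above was written in Bash; the sketch below shows the same idea in Python for illustration, building an LDIF change record and piping it to the standard ldapmodify client. The server URI, bind DN, and attribute values are hypothetical placeholders, and password handling is simplified.

```python
#!/usr/bin/env python3
"""Minimal sketch of automating an LDAP modification (the originals were Bash).

Hypothetical example: the server URI, bind DN, and attribute are placeholders,
and password handling is simplified. The script builds an LDIF change record
and feeds it to the standard `ldapmodify` client on stdin.
"""
import subprocess

LDAP_URI = "ldap://ldap.corp.example.com"  # placeholder
BIND_DN = "cn=admin,dc=example,dc=com"     # placeholder


def set_attribute(user_dn: str, attr: str, value: str, password: str) -> None:
    """Replace a single attribute on an entry via ldapmodify."""
    ldif = f"dn: {user_dn}\nchangetype: modify\nreplace: {attr}\n{attr}: {value}\n"
    subprocess.run(
        ["ldapmodify", "-H", LDAP_URI, "-D", BIND_DN, "-w", password],
        input=ldif, text=True, check=True,
    )


if __name__ == "__main__":
    # Example usage with placeholder values.
    set_attribute("uid=jdoe,ou=people,dc=example,dc=com", "title", "SRE", "secret")
```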
Encountered all major desktop and mobile OSes (Windows, Mac, Linux, ChromeOS, Android, BlackBerry, etc.) and a sophisticated user base with technically challenging support cases. I supported Google employees directly and remotely, including support with Google's internal tools and applications. In this role, I also took on small- to medium-sized IT projects, which impacted all Google campuses globally.
• Supported all Googlers in a front-line IT support role through a helpdesk and user-submitted tickets.
• Took on medium-sized IT projects that impacted all Google campuses globally, spending 8 to 10 hours per week on project tasks.
• Visited Google offices globally to support smaller offices requiring quarterly visits, or larger offices when they were short-staffed.
• Worked with other teams such as Network Operations, Windows Service Team, Linux Service Team, and many others to diagnose, troubleshoot, and repair issues causing downtime for employees or resulting in site-wide outages.
• Led a global initiative relating to Google's conference rooms, impacting all Google offices.
Managed the entire IT infrastructure, including servers, workstations, networking routers and switches, and the PBX phone system. I dealt with many operating systems, including Windows (XP, Vista, 7, Server 2003, Server 2008), Mac (OS X Lion), and Linux (Ubuntu Desktop, Ubuntu Server, Mint, and Fedora).
• Provided IT support on-site and remotely for 50+ employees at 4 locations.
• Provided emergency on-call support when needed.
• Managed and maintained all workstations, networking equipment, and servers.
• Assisted in the planning, design, documentation, and implementation of various systems, including servers, desktops, laptops, mobile phones, and software applications.
• Designed and maintained a Linux-based backup system that backed up all server-related systems company-wide (see the sketch below).
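As a rough sketch of that backup system's general shape, the snippet below mirrors a few remote paths into dated directories with rsync. The hostnames, paths, and options are placeholders, and scheduling (cron), retention, and alerting are omitted.

```python
#!/usr/bin/env python3
"""Minimal sketch of a Linux-based backup job in the spirit of that system.

Illustrative only: hostnames, paths, and rsync options are placeholders, and
scheduling (cron), retention, and alerting are omitted.
"""
import datetime
import pathlib
import subprocess

# Placeholder list of remote sources to back up.
SOURCES = ["fileserver:/srv/shares", "dbserver:/var/backups/db"]
DEST_ROOT = "/backups"  # local backup volume (placeholder)


def backup(source: str) -> None:
    """Mirror one remote path into a dated directory under DEST_ROOT."""
    stamp = datetime.date.today().isoformat()
    dest = f"{DEST_ROOT}/{stamp}/{source.replace(':', '_').replace('/', '_')}"
    pathlib.Path(dest).mkdir(parents=True, exist_ok=True)
    # Trailing slash on the source copies its contents, not the directory itself.
    subprocess.run(["rsync", "-a", "--delete", f"{source}/", dest], check=True)


if __name__ == "__main__":
    for src in SOURCES:
        backup(src)
```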
Site: www.devopsbookmarks.org
FCC Technician License for Amateur Radio
Site: www.dsicommunity.org