Capital One Senior Data Engineer in Richmond, Virginia
Location: Richmond, Virginia, United States of America
At Capital One, we’re building a leading information-based technology company. Still founder-led by Chairman and Chief Executive Officer Richard Fairbank, Capital One is on a mission to help our customers succeed by bringing ingenuity, simplicity, and humanity to banking. We measure our efforts by the success our customers enjoy and the advocacy they exhibit. We are succeeding because they are succeeding.
Guided by our shared values, we thrive in an environment where collaboration and openness are valued. We believe that innovation is powered by perspective and that teamwork and respect for each other lead to superior results. We elevate each other and obsess about doing the right thing. Our associates serve with humility and a deep respect for their responsibility in helping our customers achieve their goals and realize their dreams. Together, we are on a quest to change banking for good.
Senior Data Engineer
As a Capital One Data Engineer, you'll be part of an Agile team dedicated to breaking the norm and pushing the limits of continuous improvement and innovation. You will participate in detailed technical design, development and implementation of applications using existing and emerging technology platforms. Working within an Agile environment, you will provide input into architectural design decisions, develop code to meet story acceptance criteria, and ensure that the applications we build are always available to our customers. You'll have the opportunity to mentor other engineers and develop your technical knowledge and skills to keep your mind and our business on the cutting edge of technology. At Capital One, we have seas of big data and rivers of fast data.
Who you are:
You yearn to be part of cutting-edge, high-profile projects and are motivated by delivering world-class solutions on an aggressive schedule
You are not intimidated by challenges; you thrive under pressure, are passionate about your craft, and are hyper-focused on delivering exceptional results
You love to learn new technologies and mentor junior engineers to raise the bar on your team
It would be awesome if you have a robust portfolio on GitHub and/or open source contributions you are proud to share
You are passionate about intuitive, engaging user interfaces, as well as new and emerging concepts and techniques
What you'll do:
Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data and Fast Data applications
Building efficient storage for structured and unstructured data
Transforming and aggregating data using data-processing technologies
Developing and deploying distributed computing Big Data applications using open source frameworks like Apache Spark, Apex, Flink, NiFi, and Kafka on the AWS Cloud
Utilizing programming languages like Java, Scala, and Python; open source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift
Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
Performing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
Basic Qualifications:
At least 3 years of professional work experience with big data platforms
At least 1 year of experience with SQL, including relational databases and query authoring
At least 1 year of experience with message queuing, stream processing, and big data stores
At least 1 year of experience working with unstructured datasets
Bachelor’s or Master’s degree
Preferred Qualifications:
2+ years of Agile engineering experience
2+ years of experience with the Hadoop Stack
2+ years of experience with Cloud computing (AWS)
1+ years of experience with big data visualization tools
2+ years of Python or Java development experience
1+ years of scripting experience
1+ years of experience with relational database systems and SQL (PostgreSQL or Redshift)
1+ years of UNIX/Linux experience
At this time, Capital One will not sponsor a new applicant for employment authorization for this position.