Job Description:
We are seeking an experienced and highly skilled Senior Database Developer/Engineer to design, develop, optimize, and maintain high-performance SQL Server databases for mission-critical logistics applications. The ideal candidate will have experience in query optimization, performance tuning, database replication strategies, and ETL processing. You will work on data architecture, indexing strategies, replication for reporting environments, and Power BI dataset optimization.
Job Responsibilities:
- Database Design & Development: Design and develop optimized relational databases in Microsoft SQL Server. Create and maintain stored procedures, triggers, views, and functions for high-performance data operations. Develop ETL processes to extract, transform, and load data from different sources.
- Query Optimization & Performance Tuning: Analyze and optimize SQL queries using execution plans to improve performance. Implement indexing strategies, partitioning, and query caching to ensure database efficiency. Troubleshoot deadlocks, long-running queries, and performance bottlenecks in production. Monitor and fine-tune database performance using SQL Server Profiler, Extended Events, and Query Store.
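To illustrate the kind of tuning work this role involves, here is a minimal T-SQL sketch; the table, column, and index names are hypothetical, not part of any actual schema:

```sql
-- Surface I/O and timing statistics alongside the actual execution plan:
SET STATISTICS IO, TIME ON;

SELECT OrderId, CustomerId, OrderDate
FROM dbo.Orders                      -- hypothetical table
WHERE OrderDate >= '2024-01-01'
  AND Status = 'SHIPPED';

-- If the plan shows a scan plus key lookups, a covering nonclustered
-- index can often eliminate them:
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Status
ON dbo.Orders (OrderDate, Status)
INCLUDE (OrderId, CustomerId);

-- Query Store (SQL Server 2016+) records plan history, making
-- regressions after deployments easy to diagnose:
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
```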
- Data Integration & ETL Processing: Develop ETL processes using SSIS, Python, or Power Automate to ingest data from multiple sources. Build and maintain incremental refresh pipelines for Power BI reporting datasets. Implement real-time data replication for reporting environments without impacting production performance.
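Incremental refresh pipelines typically rely on Change Data Capture to identify new and modified rows without full-table scans. A minimal sketch, assuming a hypothetical dbo.Orders table and a SQL Server Agent service that is running:

```sql
-- Enable CDC at the database level (requires sysadmin):
EXEC sys.sp_cdc_enable_db;

-- Enable CDC on the source table; SQL Server then maintains a change
-- table that downstream ETL (SSIS, Python) can poll incrementally:
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',      -- hypothetical table
    @role_name     = NULL;           -- no gating role for this sketch
```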
- Security, Compliance & Data Governance: Implement Role-Based Access Control (RBAC) and Row-Level Security (RLS). Audit and maintain secure data access policies for internal and external reporting.
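Row-Level Security in SQL Server is implemented with an inline table-valued predicate function bound to a security policy. A sketch under an assumed schema where each row carries an owning Region column (all object names below are hypothetical):

```sql
-- Predicate: a row is visible only when its Region matches the
-- caller's session context value.
CREATE FUNCTION dbo.fn_RegionPredicate (@Region AS nvarchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @Region = CAST(SESSION_CONTEXT(N'Region') AS nvarchar(50));
GO

-- Bind the predicate to the table; filtering is then enforced
-- transparently for every query against dbo.Shipments.
CREATE SECURITY POLICY dbo.RegionFilter
ADD FILTER PREDICATE dbo.fn_RegionPredicate(Region)
ON dbo.Shipments                     -- hypothetical table
WITH (STATE = ON);
```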
- Collaboration & Automation: Work closely with Application Developers, BI Teams, and DevOps Engineers. Automate database deployment and maintenance tasks using PowerShell and SQL Agent Jobs.
- Implement CI/CD pipelines for database schema changes.
- Agile Development & Collaboration: Work within an Agile Scrum/Kanban team: estimate and prioritize database-related tasks in Sprint Planning, and collaborate with developers and BI teams on database enhancements in Daily Standups. Maintain database versioning through CI/CD pipelines. Participate in code reviews and peer testing of database scripts.
Skills:
- Strong understanding of query execution plans, indexing, and performance tuning techniques.
- Experience in designing star schema, snowflake schema, and OLTP/OLAP data models.
- Experience working in an Agile environment (Scrum/Kanban).
- Experience with database CI/CD pipelines.
- Experience with SSIS, Python (Pandas), Azure Data Factory, or Power Automate.
- Knowledge of incremental data refresh strategies for Power BI datasets.
- Knowledge of big data frameworks such as Databricks is a plus.
- Strong knowledge of RBAC, RLS, and auditing policies for secure access control.
- Knowledge of Change Data Capture (CDC) and data pipeline optimization.
- Proven experience in handling large transactional datasets and Power BI reporting databases.
- Experience deploying to Azure.
Job Category: Azure Data Factory, CI/CD pipelines, data refresh, Databricks, ETL, Microsoft SQL Server, OLTP/OLAP data models, Power Automate, Power BI, Python, RBAC, RLS, Scrum/Kanban, SQL, SSIS
Job Type: Full Time
Job Location: Pune, Maharashtra
Country: India
Experience: 8 years
