


I'm a Business Analyst preparing my portfolio, and I had just finished reading about dynamic pricing concepts - the theory, the examples, the math. I understood the what and the why. But I hadn't actually done it myself.
That's when I decided: I need to build a real business case analysis in Excel. Something I could put in my portfolio. Something that would prove I can take a business problem and work through it analytically.
My goal was clear:
Build a dynamic pricing analysis for a CPG product
Use Excel to demonstrate my analytical skills
Keep it beginner-friendly but portfolio-worthy
Show logical thinking, business understanding, and math skills
After discussing requirements, I landed on this scenario:
FreshJuice, a beverage brand, sells 1L juice packs across 30 retail stores at a fixed price of $4.00. The question: Should they implement dynamic pricing (different prices for weekdays vs weekends) to increase revenue?
This felt real. This felt solvable. This felt like something I could actually present to a hiring manager.
I asked for datasets - and I got four CSV files:
Baseline sales data - 30 stores with current sales, locations, competitor info
Customer behavior - Who shops when, and how price-sensitive they are
Demand elasticity reference - The coefficients showing how demand responds to price changes
Historical promotions - Past experiments with pricing changes
But here's the thing: no formulas, no templates, no hand-holding. Just raw data and a problem to solve.
I opened Excel, imported everything, and stared at the screen.

Now what?
Before jumping into calculations, I asked for a thinking framework. Not solutions - just principles to guide my analysis.
That's when I got the mental models that changed how I approached the problem:
Revenue = Price × Quantity
The core tension: raising prices gives you more per unit, but you sell fewer units. Lowering prices means less per unit, but potentially more volume. The art is finding the sweet spot.
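To make that tension concrete, here's a tiny back-of-the-envelope check with made-up numbers (a quick sketch, not part of the actual analysis):

```python
# Toy example with made-up numbers: does a 10% price increase help or hurt revenue?
baseline_price, baseline_units = 4.00, 100
baseline_revenue = baseline_price * baseline_units        # $400.0

new_price = baseline_price * 1.10                         # $4.40
revenue_if_demand_holds = new_price * 100                 # $440.0 - customers barely react
revenue_if_demand_drops = new_price * 85                  # $374.0 - customers walk away

print(baseline_revenue, revenue_if_demand_holds, revenue_if_demand_drops)
```

Whether the price increase wins depends entirely on how many units I lose - which is exactly what elasticity measures.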
Not all customers are the same:
Desperate customers (inelastic) will pay more - they need the product
Flexible customers (elastic) will walk away if price is too high
The key question: Which customer segments can afford a price increase? Which need a discount to buy more?
This is where it got technical. Elasticity coefficients show how "stretchy" demand is:
Low elasticity (like -0.4): Customers barely react to price changes → opportunity to increase price
High elasticity (like -1.2): Customers react strongly → opportunity to attract volume with discounts
Urban stores aren't the same as Rural stores. Weekday shoppers aren't the same as weekend shoppers. Don't apply one-size-fits-all pricing.
I have competitor pricing data. I should use it. Don't price in a vacuum.
I opened the elasticity dataset and saw this:
| Store_Type | Time_Period | Price_Elasticity_Coefficient |
|---|---|---|
| Urban | Weekday | -0.5 |
| Urban | Weekend | -1.2 |
| Rural | Weekday | -0.4 |
And I thought: "I don't understand this very well."
This was my first honest moment of confusion.
Here's what I learned:
The elasticity coefficient tells you: "When I change price by X%, how much does demand change?"
The formula:
% Change in Demand = Elasticity × % Change in Price
Urban Weekday (elasticity -0.5) - if I increase price by +10%:
% Change in Demand = -0.5 × (+10%) = -5%
Demand drops by only 5%
Why so little? Office workers on Monday morning need their juice. They're not price-sensitive.
Urban Weekend (elasticity -1.2) - if I increase price by +10%:
% Change in Demand = -1.2 × (+10%) = -12%
Demand drops by 12%!
Why so much? Weekend shoppers are families comparing prices. They're very price-sensitive.
The insight hit me: The size of the elasticity number tells me how easily customers walk away.
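To keep myself honest, I wrote the relationship down as a tiny helper I could sanity-check numbers against (a sketch of the same formula, not my Excel build):

```python
def pct_demand_change(elasticity: float, pct_price_change: float) -> float:
    """% change in demand = elasticity x % change in price (both as decimals)."""
    return elasticity * pct_price_change

# Urban weekday (elasticity -0.5), price +10% -> demand falls ~5%
print(pct_demand_change(-0.5, 0.10))   # -0.05
# Urban weekend (elasticity -1.2), price +10% -> demand falls ~12%
print(pct_demand_change(-1.2, 0.10))   # -0.12
```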
I decided to test my understanding with real numbers from the dataset.
Scenario 1 - Rural Weekday. Original demand: 75 units
Elasticity: -0.4
What if I increase price by +10%?
My calculation:
New Demand = 75 × (1 + (-0.4) × 0.1)
= 75 × (1 - 0.04)
= 75 × 0.96
= 72 units
Result: Demand drops by only 3 units.
My first mistake: I initially calculated 76.96 and said "demand goes up by 2 units" - wrong direction! Price increase means demand decreases.
After correcting myself: 75 → 72 units. Lost 3 units.
The insight: Rural weekday customers are NOT price-sensitive. They'll keep buying even if I raise prices. This is a pricing opportunity.
Scenario 2 - Urban Weekend. Original demand: 295 units (I initially used 185 by mistake, but the method was correct)
Elasticity: -1.2
What if I decrease price by -5%?
My calculation:
New Demand = 295 × (1 + (-1.2) × (-0.05))
= 295 × (1 + 0.06)
= 295 × 1.06
= 312.7 units
Result: Demand increases by about 18 units!
The insight: Urban weekend customers are VERY price-sensitive. A small discount attracts many customers. This is a volume opportunity.
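Before trusting my hand math, I reran both scenarios as a quick script - the same linear elasticity formula, just to confirm the arithmetic:

```python
def new_demand(base_demand: float, elasticity: float, pct_price_change: float) -> float:
    """Linear elasticity approximation: new demand = base * (1 + elasticity * %Δprice)."""
    return base_demand * (1 + elasticity * pct_price_change)

# Scenario 1: Rural weekday, 75 units, elasticity -0.4, price +10%
print(new_demand(75, -0.4, 0.10))    # 72.0  -> lose 3 units
# Scenario 2: Urban weekend, 295 units, elasticity -1.2, price -5%
print(new_demand(295, -1.2, -0.05))  # 312.7 -> gain ~18 units
```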
After working through these calculations, I started seeing the strategic picture:
| Customer Segment | Elasticity | Behavior | Strategy Implication |
|---|---|---|---|
| Rural Weekday | Low (-0.4) | Loyal, limited alternatives | Can increase price without losing many customers |
| Urban Weekend | High (-1.2) | Price-sensitive, comparing options | Can attract volume with discounts |
The realization: This isn't just about math. It's about understanding human behavior and market dynamics.
After understanding the concept, I got excited and wanted to speed up. I asked: "Can you help me design the Excel workbook with the first scenario worked out?"
And then I caught myself.
Earlier, I had specifically said: "Strictly no solutions, let me give this an honest try."
Was I about to undermine my own learning? Was I about to build a portfolio piece I didn't actually understand?
I pulled back: "You are right buddy. I went overboard. I'll do this by myself."
That moment of discipline mattered. Because when I sit in an interview and someone asks "Walk me through how you built this analysis" - I need to have actually built it.
Before diving into Excel, I asked one more strategic question: "How do I systematically test different pricing scenarios? Should I use 5% jumps? Random changes? Is there a standard?"
Here's the framework I got - three common ways to define test scenarios:
Percentage-based steps: conservative (±5%, ±10%) or moderate (±5%, ±10%, ±15%). Easy to compare, and businesspeople think in percentages.
Competitor-anchored prices: match competitor prices, go slightly premium (competitor + 5%), or offer a slight discount (competitor - 5%).
Psychological price points: $3.99, $3.79, $4.29, etc. Worth knowing about, but for analysis, round numbers work fine.
Based on elasticity logic, I should test:
| Store Type | Weekday Strategy | Weekend Strategy | Rationale |
|---|---|---|---|
| Urban | +5% | -5% | Loyal weekday customers, price-sensitive weekenders |
| Suburban | 0% | -5% | Middle ground weekdays, somewhat sensitive weekends |
| Rural | +10% | 0% | Very loyal weekdays, moderately loyal weekends |
The plan: Build this scenario first, calculate revenue impact, then iterate.
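As a preview of what the Excel columns will need to do, here's a rough sketch of the same hypothesis in Python. The $4.00 base price is from the scenario, the 295 and 75 unit figures come from my worked examples, and the Urban Weekday demand is a placeholder until I pull the real numbers from the baseline sales file:

```python
BASE_PRICE = 4.00

# segment: (elasticity, planned % price change, baseline demand in units)
# Urban Weekday demand is a placeholder; the other two come from my worked examples.
segments = {
    "Urban Weekday": (-0.5,  0.05, 200),
    "Urban Weekend": (-1.2, -0.05, 295),
    "Rural Weekday": (-0.4,  0.10,  75),
}

for name, (elasticity, price_change, demand) in segments.items():
    new_price = BASE_PRICE * (1 + price_change)
    new_units = demand * (1 + elasticity * price_change)
    baseline_rev = BASE_PRICE * demand
    new_rev = new_price * new_units
    print(f"{name}: {baseline_rev:.0f} -> {new_rev:.0f} "
          f"({(new_rev / baseline_rev - 1) * 100:+.1f}%)")
```

The real workbook will do this per store, with demand and competitor prices pulled from the CSVs instead of hard-coded values.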
What I've taken away so far:
Technical skills:
How to apply elasticity coefficients to calculate demand changes
How to structure scenario analysis in Excel
How to think about revenue trade-offs (price vs. volume)
Business lessons:
Customer segmentation matters - one strategy doesn't fit all
Data should guide strategy, not just intuition
Competitor context is critical
Small percentage gains at scale can mean big dollars
Process lessons:
Don't rush to solutions - understand the problem first
Theory is useless without application
Making mistakes (like my first calculation error) is part of learning
Discipline to do the work yourself > taking shortcuts
Now I'm ready to actually build this in Excel. My plan:
Set up columns for demand calculations with elasticity
Create formulas that reference elasticity coefficients
Calculate new demand and revenue for each store
Apply my hypothesis (increase rural weekday, decrease urban weekend, etc.)
Calculate total revenue vs. baseline
See if it works
If revenue improves, can I optimize further?
If it doesn't, what assumptions were wrong?
Test multiple scenarios
Create a summary dashboard
Visualize revenue changes
Write clear recommendations
Include risks and implementation considerations
I'm at the beginning of actually building this analysis. I understand the concepts. I've worked through the math manually. I know the framework.
But I haven't built the Excel workbook yet. I haven't tested the scenarios. I don't know if my hypothesis will work.
And that's okay.
Because this is real learning. Not copying formulas from someone else. Not presenting work I didn't understand. Not faking it.
This is the messy, uncertain, trial-and-error process of actually solving a business problem.
And when I finish this analysis - when I've tested scenarios, made decisions, documented my logic - I'll have something real to show. Something I can defend. Something I actually understand.
If you're trying to build portfolio projects like I am, here's what I'd tell you:
Start with real problems, not templates - Don't just download a Kaggle dataset and run models. Pick a business problem you care about.
Ask for frameworks, not solutions - Getting principles and mental models is more valuable than getting answers.
Work through examples manually first - Before building complex Excel models, do the math by hand. Understand the logic.
Embrace confusion as a signal - When I didn't understand elasticity, I asked. That was the breakthrough moment.
Resist shortcuts - When I almost asked for the complete workbook, I would've robbed myself of learning.
Document your thinking - This blog is as much for me as for others. Writing this down solidifies my understanding.
A week ago, I knew dynamic pricing existed but had never analyzed it myself.
Today, I understand:
How elasticity works
Why different customers need different strategies
How to calculate demand changes from price changes
How to structure a pricing analysis
Tomorrow, I'll build the Excel analysis that proves I can apply these concepts.
This is what portfolio building should feel like: challenging, uncertain, but ultimately empowering.
Because when I walk into an interview and say "I built a dynamic pricing analysis for a CPG brand" - I won't be bluffing. I'll know exactly what I did and why.
And that confidence? That's worth more than any pre-built template.
Status: Analysis in progress. Concepts understood. Excel workbook pending. Learning happening.
Next update: When I've completed the analysis and can share results, insights, and what I learned from actually building it.
This blog documents my journey from theory to application - the messy middle where real learning happens.
