Karol Andruszków
Karol is a serial entrepreneur who has successfully founded 4 startup companies. With over 11 years of experience in Banking, Financial, IT and eCommerce sector, Karol has provided expert advice to more than 500 companies across 15 countries, including Poland, the USA, the UK, and Portugal.
How to Build a Second-Hand Marketplace Platform Like Vinted? Part I: Technology
Updated: Wed, Jan 21
Reading time: 17 minutes
We have built marketplace platforms for more than fifteen years at Ulan Software. During this time, we have seen the same request many times. Founders and CTOs ask about building consumer-to-consumer (C2C) platforms. Most ideas frame around second-hand goods and the circular economy. The interest is real. Demand exists. Still, many teams underestimate how hard it is to scale such platforms.
This is why we chose Vinted as a case study. Vinted is not a theoretical success story. The company started in 2008 as a simple website, created by Milda Mitkutė and Justas Janauskas. Over time, the platform grew into the largest second-hand fashion marketplace in Europe.
By October 2024, Vinted reached a €5 billion valuation. In 2024 alone, the company reported €813.4 million in revenue and €76.7 million in net profit. These numbers matter, because they prove that peer-to-peer marketplaces can reach scale, stability, and profitability.
This article starts a three-part series about Vinted. In Part I, we focus on technology and architecture choices. These decisions shaped the platform from the early days. We do not aim to celebrate outcomes. Instead, we examine choices, limits, and long-term effects.
Throughout the series, we also include comments from our CTO, Wojtek. His experience helps connect Vinted’s decisions with real challenges faced by teams building similar platforms today.
Buyers and Sellers Flow in a C2C Marketplace
At the core, every second-hand marketplace supports two main paths. One path serves sellers. The other serves buyers. Both must work well on their own, and both must work together in parallel.
This structure is typical for C2C platforms, where users act as both supply and demand. We previously explained what defines a peer-to-peer marketplace model, including how it differs from classic ecommerce setups.
First we will describe the buyer and seller journeys. In the next sections we will explain how Vinted implemented and scaled each part over time.
Buyer Flow
A buyer starts with discovery. This usually means browsing categories or searching by keyword. The platform must support filters such as size, brand, price, and condition. At scale, search speed matters as much as relevance. Slow or inaccurate results reduce engagement fast.

Once a buyer finds an item, the next step is evaluation. This includes viewing photos, reading the description, checking the price, and reviewing seller information. Many buyers ask questions before purchasing, so messaging must feel instant and reliable. Delays here often lead to abandoned purchases.
The purchase itself introduces complexity. The platform handles payment, not the users. Vinted uses integrated payments with buyer protection. Funds are held in escrow until delivery is confirmed. This model reduces risk on both sides and increases trust in the platform.
After payment, logistics take over. The system generates shipping instructions or labels and connects with carriers. Buyers track delivery status inside the app. Once the item arrives, the buyer confirms receipt. This confirmation triggers the payout to the seller. Ratings and reviews usually follow, feeding back into reputation systems.
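To make the escrow step more concrete, here is a minimal sketch in plain Ruby of how such a buyer-protection lifecycle could be modeled. The class, states, and method names are our own illustration under assumed rules, not Vinted's actual implementation.

```ruby
# Minimal sketch of an escrow-style order lifecycle (illustrative only).
class EscrowOrder
  attr_reader :state

  def initialize
    @state = :created
  end

  # Buyer pays; funds are held by the platform, not sent to the seller.
  def capture_payment!
    transition!(from: :created, to: :paid)
  end

  # Seller marks the parcel as sent.
  def mark_shipped!
    transition!(from: :paid, to: :shipped)
  end

  # Buyer confirms receipt; only now may funds be released.
  def confirm_delivery!
    transition!(from: :shipped, to: :delivered)
  end

  # Payout to the seller happens strictly after delivery confirmation.
  def release_funds!
    transition!(from: :delivered, to: :released)
  end

  # Dispute path: refunding is possible as long as funds were not released.
  def refund!
    raise "cannot refund after release" if @state == :released
    @state = :refunded
  end

  private

  def transition!(from:, to:)
    raise "invalid transition #{@state} -> #{to}" unless @state == from
    @state = to
  end
end

order = EscrowOrder.new
order.capture_payment!
order.mark_shipped!
order.confirm_delivery!
order.release_funds!
```

The key property is that the payout step is only reachable after delivery confirmation, which is what makes the escrow model trustworthy for both sides.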
Seller Flow
The seller journey starts with listing an item. A seller uploads photos, writes a description, sets a price, and selects shipping options. From a technical view, this step touches several systems at once. Images need storage and processing. Item data needs validation and persistence. Pricing and shipping rules must be enforced.

Once published, the listing must appear in search results quickly. Search indexing is critical here. Delays reduce visibility and lower the chance of sale. At scale, this step often pushes teams toward asynchronous processing and eventual consistency.
When an item sells, the platform notifies the seller and provides a prepaid shipping label. The seller ships the item and marks it as sent. Throughout the process, notifications keep both sides informed. These include messages, emails, and push alerts.
If problems occur, such as delays or damaged items, the platform steps in. Disputes, refunds, and moderation rely on clear workflows and internal tools. Trust and safety systems support this layer and protect the marketplace as a whole.
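As a rough sketch of the asynchronous indexing mentioned above, the example below returns to the seller immediately and hands search indexing to a background worker. The queue, worker, and method names are hypothetical; a real platform would use a job framework and a search engine client rather than an in-process thread.

```ruby
require "json"

# Illustrative only: synchronous write, asynchronous search indexing.
INDEXING_QUEUE = Thread::Queue.new

# Background worker that updates the search index with some delay;
# until it runs, search results are eventually consistent.
indexer = Thread.new do
  while (listing = INDEXING_QUEUE.pop)
    # A real system would call the search engine here.
    puts "indexed listing #{listing[:id]}: #{listing.to_json}"
  end
end

def publish_listing(id:, title:, price:)
  listing = { id: id, title: title, price: price }
  # 1. Persist the listing in the primary store (stubbed here).
  # 2. Enqueue indexing so the request returns immediately.
  INDEXING_QUEUE << listing
  listing
end

publish_listing(id: 1, title: "Wool coat", price: 30)
INDEXING_QUEUE << nil # signal shutdown for this demo
indexer.join
```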
Early Architecture and Tech Stack
Monolith Architecture Era (2008–2015)
In its first years, Vinted followed a path that many strong consumer platforms once took. The team built a single application and focused on speed, not structure. This choice carried the platform from its first users to more than one million active users over several years.

The core system was a Ruby on Rails monolith backed by MySQL. User accounts, listings, orders, and payments lived in one codebase and deployed as one unit. At that stage, this was not a compromise. It was the right tool for the job. A single application meant fast development, simple debugging, and predictable deployments. For an early-stage platform, these benefits outweighed any future scaling concerns.
This monolith handled more load than many teams expect today. By 2015, the application ran a test suite of roughly 7,000 tests (source: Vinted engineering blog). Deployments happened around 300 times per day. Even with one main application, the team maintained high delivery speed. This was possible because of strong automation and discipline, not because the architecture was complex.
Why Ruby Made Sense
The choice of Ruby was shaped by timing. Vinted started in 2008, when Ruby on Rails was at its peak. At that moment, choosing Ruby was as common as choosing Node.js would be today.

Ruby enabled rapid iteration, and the Rails ecosystem covered most needs out of the box. Even years later, Vinted engineers reported that some Ruby-based services could handle up to 50,000 requests per second with proper caching and native extensions. The language itself did not block growth. Design choices did.
Running the Platform in Different Markets
From the start, Vinted operated across countries. Instead of one global instance, the platform ran country-specific portals. Each major market had its own application instance and database. German users lived on one stack. Lithuanian users on another. This approach created clear data boundaries.

This setup solved several problems at once. Databases stayed smaller. Traffic was isolated by country. Legal and operational differences were easier to manage. In practice, this worked like geographic sharding before the team had to implement real database sharding. It delayed many hard problems by years.
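At the application layer, that per-country separation could be pictured like this. The country codes and hosts below are placeholders, not Vinted's configuration.

```ruby
# Illustrative routing of each market to its own application stack and database.
PORTAL_CONFIG = {
  "DE" => { host: "db-de.internal", database: "portal_de" },
  "LT" => { host: "db-lt.internal", database: "portal_lt" },
  "FR" => { host: "db-fr.internal", database: "portal_fr" }
}.freeze

def database_for(country_code)
  PORTAL_CONFIG.fetch(country_code) do
    raise ArgumentError, "no portal configured for #{country_code}"
  end
end

# Every request carries its market, so data never crosses country boundaries.
puts database_for("DE") # => {:host=>"db-de.internal", :database=>"portal_de"}
```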
Infrastructure and Deployment Model
The platform ran on bare-metal servers in data centers, not public cloud services. By 2015, Vinted operated across three data centers in Europe and the United States. Nginx handled SSL termination and load balancing. Unicorn served the Rails application.

This stack was common at the time, but Vinted pushed it hard. Requests flowed from Nginx to Ruby workers with aggressive caching at multiple layers. Local in-process caches reduced repeated work inside each worker. Memcached handled shared cache misses. This reduced database pressure and avoided traffic spikes when cache entries expired.
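The multi-layer caching described above could be approximated with a read-through cache that checks a process-local layer before a shared one, and jitters expiry times so many entries do not expire at the same moment. The shared store is stubbed with a Hash here; in production it would be Memcached.

```ruby
# Illustrative two-layer read-through cache (process-local + shared).
class LayeredCache
  def initialize(shared_store)
    @local  = {}           # in-process cache, cheapest lookups
    @shared = shared_store # stand-in for Memcached
  end

  def fetch(key, ttl: 300)
    entry = @local[key]
    return entry[:value] if entry && entry[:expires_at] > Time.now

    value = @shared[key] || yield # fall back to the real data source
    @shared[key] = value

    # Jitter the expiry so entries do not all expire at the same moment,
    # which would otherwise send a spike of traffic to the database.
    jittered_ttl = ttl + rand(0..ttl / 10)
    @local[key] = { value: value, expires_at: Time.now + jittered_ttl }
    value
  end
end

cache = LayeredCache.new({})
puts cache.fetch("item:42") { "loaded from MySQL" }
puts cache.fetch("item:42") { "never called on a warm cache" }
```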
Selective Use of Microservices
Although the system was mainly monolithic, the team did not treat the monolith as sacred. Certain functions moved out early. According to a 2015 architecture overview:

“Several microservices are built around the core Rails app, all with a clear purpose, like sending iOS push notifications, storing and serving brand names, storing and serving hashtags.”
Nerijus Bendžiūnas and Tomas Varaneckas
These services handled narrow tasks and stayed small. They reduced load on the core application without forcing a full architectural split. This hybrid approach kept complexity under control while solving real bottlenecks.
Images and Search as First-Class Concerns
Images were processed outside the request path. Uploaded photos went through a separate service. Processed images were stored in GlusterFS and cached after the third request. Early access usually came from the uploader, so caching too early made little sense.

Search also avoided the primary database. Filtered catalog pages queried a search index instead of MySQL. Initially, this index was powered by Sphinx. In 2014, the team migrated to Elasticsearch to support richer queries and better scaling. This allowed buyers to filter by size, color, and brand without slowing down the core system.
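The “cache only after the third request” rule can be expressed in a few lines. The threshold and in-memory storage below are illustrative placeholders.

```ruby
# Illustrative "cache after the N-th access" policy for processed images.
class LazyImageCache
  CACHE_AFTER = 3 # early hits usually come from the uploader only

  def initialize
    @hits  = Hash.new(0)
    @cache = {}
  end

  def fetch(image_id)
    return @cache[image_id] if @cache.key?(image_id)

    @hits[image_id] += 1
    image = yield # expensive: load and process the image

    # Only promote to the cache once the image is demonstrably popular.
    @cache[image_id] = image if @hits[image_id] >= CACHE_AFTER
    image
  end
end

cache = LazyImageCache.new
3.times { cache.fetch("photo-1") { "processed bytes" } }
```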
Key Lessons from This Period
The early architecture served Vinted well for seven years. The system scaled from thousands to millions of users without a full rewrite. The key was not technology choice. It was restraint.

The team avoided early over-engineering. They invested in testing, automation, and caching instead of chasing trends. This phase shows a simple truth: a well-run monolith can carry a marketplace far, if teams understand its limits and prepare to change when those limits appear.
In the next section, we examine when those limits surfaced and why Vinted had to move beyond this model.
The First Scaling Challenges
By the late 2010s, Vinted entered a very different phase of growth. Traffic increased fast. International expansion accelerated. What once felt simple and controlled started to feel tight.
Between 2018 and 2020, backend traffic reached peak levels of roughly 150,000 requests per second (source: Vinted engineering blog). For a Rails-based monolith, this was extreme scale. The system still worked, but the cost of keeping it stable kept rising. The original design had optimized for speed of development. Now the same design slowed teams down.
Complexity Inside the Monolith
Over time, the monolith absorbed more responsibility. Features for listings, payments, messaging, moderation, and search lived in the same codebase. Clear boundaries never existed. Logic from different domains overlapped.

As a result, developers struggled to change one area without touching another. As the CTO later described, feature logic became tangled. Teams interfered with each other’s work. Release risk increased. Even small changes required deep knowledge of unrelated parts of the system.
Database Pressure
The database layer reached its limits next. Early on, country-based separation delayed hard scaling problems. Eventually, though, the largest portals outgrew their databases.

As early as 2015, engineers warned that the largest tables would no longer fit on a single server. Over the following years, the team applied vertical sharding. Tables and workloads moved across separate database servers.
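Vertical sharding of this kind can be pictured as a mapping from table groups to dedicated database servers. The table names and hosts below are invented for illustration.

```ruby
# Illustrative vertical sharding: each table group lives on its own server.
TABLE_SHARDS = {
  users:        "mysql-users.internal",
  items:        "mysql-items.internal",
  transactions: "mysql-transactions.internal",
  messages:     "mysql-messages.internal"
}.freeze

def connection_for(table)
  TABLE_SHARDS.fetch(table) { "mysql-default.internal" }
end

# Application code asks for a connection per table instead of per database.
puts connection_for(:items) # => "mysql-items.internal"
```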
Latency at Global Scale
Geography added another layer of strain. As the platform expanded across Europe, latency became visible to users. Data lived in specific regions. Requests crossed data centers. Page loads slowed.

The architecture also created large failure domains. A slowdown in one core system could affect many countries. Supporting cross-border transactions was almost impossible. This limitation became clear when Vinted planned to merge country portals into a single international marketplace.
Unpredictable Traffic
Growth changed traffic patterns. Seasonal peaks became intense. Internally, the team even named this period “Vinted Autumn,” when usage surged rapidly.

These pressure patterns are not unique to Vinted. They reflect broader trends shaping second-hand marketplace platforms, especially as recommerce moves into the mainstream.
Some endpoints became expensive. A single request could trigger hundreds of database queries across dozens of shards. Latency spiked. One slow query could affect unrelated parts of the system.
And since everything was interconnected, search issues affected checkout. Reporting jobs slowed user-facing APIs. Stability depended on every part behaving well at the same time.
At this point, the limits were clear. The monolith no longer supported the company’s direction. It slowed development. It increased risk. It blocked global expansion.
The team understood that incremental fixes would not be enough. The next phase required a deeper change in architecture and communication patterns.
Domain-Driven Design and Microservices
Vinted’s first serious response to scaling pressure was not technical. It was conceptual. Around 2015, the team reached a clear conclusion: breaking the monolith without understanding it would only move problems, not solve them.
As the engineering team later explained on the Vinted Engineering Blog:
“Our first step wasn’t to break the monolith apart. It was to understand it. A few engineers introduced Domain-Driven Design as a way to map responsibilities and expose natural boundaries inside the application.”
Dejan Menges
This decision shaped everything that followed.
Domain-Driven Design as a Discovery Tool
Domain-Driven Design (DDD) became the lens through which the team examined the system. The goal was simple in theory and demanding in practice. Identify what the platform actually does, group related responsibilities, and give those groups clear names.
This process took time. By Vinted’s own account, it took at least two years to map the system. As a result, engineers identified roughly 300 business domains inside what had previously been treated as one application.
These domains covered areas such as:
- Catalog and item matching
- Transactions and orders
- Payments
- Shipping and logistics
- User accounts and authentication
- Messaging
- Search and discovery
- Feeds and recommendations
- Categories and taxonomy
- Notifications
And many more beyond these examples.
This changed how teams worked. Ownership became clear. Each domain had a responsible team. Responsibilities that had been mixed together became visible. Development speed improved without changing any infrastructure.
A Vinted staff engineer summarized:
“DDD didn’t give us all the answers, but it gave us the vocabulary to find them. It gave teams clarity about what they owned. It highlighted places where responsibilities were tangled together. It made development faster simply because people weren’t stepping on each other’s toes anymore.”
Dejan Menges
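One lightweight way to make such domain boundaries visible in a Ruby codebase, before extracting any services, is to namespace code by domain and keep cross-domain calls behind small public interfaces. The modules below are a generic illustration, not Vinted's structure.

```ruby
# Illustrative domain namespacing inside a single codebase.
module Catalog
  # Public entry point other domains are allowed to call.
  def self.publish_item(title:, price:)
    { id: rand(1_000), title: title, price: price }
  end
end

module Shipping
  def self.quote(item_id:, country:)
    { item_id: item_id, country: country, price_cents: 349 }
  end
end

module Transactions
  # Transactions depends on Catalog and Shipping only through their
  # public module methods, never their internals.
  def self.start_order(item:, country:)
    { item: item, shipping: Shipping.quote(item_id: item[:id], country: country) }
  end
end

item = Catalog.publish_item(title: "Denim jacket", price: 25)
puts Transactions.start_order(item: item, country: "DE")
```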
From Domains to Services
Once boundaries were clear, the path toward microservices became safer. The domain map acted as a blueprint. Not every domain became its own service. That would have created unnecessary overhead. Instead, the team focused on domains that needed independent scaling, frequent changes, or strong isolation.

This transition was gradual. There was no single rewrite. The monolith shrank as services grew around it.
This shows how marketplace architectures typically evolve at scale, where domain clarity matters more than technology speed.
Tech Stack Modification
This move also changed the technology stack. Ruby on Rails remained central, especially where business logic and team familiarity mattered most. At the same time, the team introduced Go for new services that required high concurrency and predictable performance.

By 2023, Vinted engineers had written more than 600,000 lines of Go code. That represented about 23% of the backend codebase. Ruby stayed the default choice. Go became the tool for workloads where Ruby would struggle. In some cases, Rust was used for very specific, performance-critical tasks.
Read more about microservices architecture: How to Create Microservices Architecture - Our Experience
Moving to Event-Driven Architecture
As Vinted expanded across Europe, the United States, and Canada, one problem became impossible to ignore. How to serve requests from many regions with consistent data while maintaining fast response times?
The team made a clear decision early on. They would not shard core write models across regions. Instead, they kept a single source of truth for writes and pushed read data closer to users.
As a Vinted engineer explained on the Vinted Engineering Blog:
“One of the biggest decisions we made was choosing not to shard our primary data models across regions. Instead, we embraced a model where all writes happen in the primary site and read-only projections are replicated around the world.”
Dejan Menges, Staff engineer at Vinted
This choice defined the next architectural phase.
Moving Away From Synchronous Coupling
In the monolith, most interactions happened through direct calls. One action triggered many others in a tight sequence. At scale, this model broke down. Each call added latency. Each dependency increased failure risk.

To address this, Vinted shifted toward asynchronous communication. Services stopped calling each other directly for non-critical work. Instead, they published events.
When something important happened, such as an item being listed or an order being placed, the owning service emitted an event describing the change. Other services subscribed to these events and reacted independently.
This approach reduced coupling. A catalog service no longer needed to know which systems cared about new listings. Search, recommendations, analytics, and feeds could all listen and react on their own terms.
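A minimal in-process sketch of that publish/subscribe decoupling is shown below. In production the bus would be Kafka or a similar broker and the subscribers would be separate services; the event names here are invented.

```ruby
# Illustrative in-process event bus: publishers do not know their subscribers.
class EventBus
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(event_name, &handler)
    @subscribers[event_name] << handler
  end

  def publish(event_name, payload)
    @subscribers[event_name].each { |handler| handler.call(payload) }
  end
end

bus = EventBus.new

# Search, feeds, and analytics each react independently to the same event.
bus.subscribe("item.listed") { |e| puts "search: index item #{e[:id]}" }
bus.subscribe("item.listed") { |e| puts "feeds: fan out item #{e[:id]}" }
bus.subscribe("item.listed") { |e| puts "analytics: count listing #{e[:id]}" }

# The catalog service only announces what happened.
bus.publish("item.listed", { id: 42, title: "Wool coat" })
```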
Improved Latency and UX
Events also shortened the critical path for users. Only essential actions stayed synchronous. Everything else moved out of the request flow.

A purchasing flow shows this well. The order and payment confirmation remained synchronous. After that, events triggered secondary actions. These included notifications, search updates, feed changes, and internal metrics. The buyer received fast feedback. The system caught up moments later.
Embracing Eventual Consistency
Marketplace workflows span many domains. Orders touch catalog, payments, shipping, and trust systems. In a monolith, teams often try to wrap this logic in one transaction. In distributed systems, that approach fails.

Vinted adopted the Saga pattern to handle these workflows. Each service performed a local transaction and emitted events. If a step failed, compensating actions followed, so payments could be refunded and reservations released.
This required a mindset shift. As one engineer wrote: "distributed systems demand the opposite mindset of a monolith".
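The Saga pattern can be sketched as an ordered list of local steps, each paired with a compensating action that runs if a later step fails. This is a generic illustration, not Vinted's workflow engine.

```ruby
# Illustrative saga: local steps with compensations, no global transaction.
class Saga
  Step = Struct.new(:name, :action, :compensation)

  def initialize
    @steps = []
  end

  def add_step(name, action:, compensation:)
    @steps << Step.new(name, action, compensation)
    self
  end

  def run
    completed = []
    @steps.each do |step|
      step.action.call
      completed << step
    end
    :ok
  rescue => e
    # Undo the already-completed steps in reverse order.
    completed.reverse_each { |step| step.compensation.call }
    [:compensated, e.message]
  end
end

result = Saga.new
  .add_step("reserve item",
            action:       -> { puts "item reserved" },
            compensation: -> { puts "reservation released" })
  .add_step("charge buyer",
            action:       -> { puts "payment captured" },
            compensation: -> { puts "payment refunded" })
  .add_step("create shipment",
            action:       -> { raise "carrier unavailable" },
            compensation: -> { puts "shipment cancelled" })
  .run

puts result.inspect # => [:compensated, "carrier unavailable"]
```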
Event-driven architecture changed how teams worked. Engineers designed services around guarantees and recovery, not just endpoints. Discussions focused on data freshness, ordering, and failure scenarios. By the early 2020s, Vinted described the platform as a hybrid system.
Running the Platform Across Regions
Scaling across continents forced Vinted to make a clear architectural call. The team chose centralized writes and distributed reads. This decision avoided the complexity of multi-master systems while keeping the platform fast for users worldwide.
All write operations go to a single primary region. Listings, orders, and payments are created there. Other regions serve read-only data through replicated projections and caches. Vinted wrote: “all writes happen in the primary site and read-only projections are replicated around the world.”
This model sacrifices immediate global consistency for simplicity and reliability. A user in one region may see an update a few seconds later than a user in another region. Vinted accepted this delay. The alternative would require conflict resolution and distributed consensus, which adds risk without clear user value.
To support fast local reads, the platform relies on data projections. These are read-optimized views built from event streams. Features such as search, feeds, and listing pages query these projections instead of core databases. Each projection can use its own schema and indexes, tuned for the region it serves.
Events keep projections in sync. If a region falls behind due to network issues, it can replay events and recover automatically.
In practice, this architecture delivers low-latency browsing and predictable performance at scale. Core transactions remain consistent.
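A stripped-down sketch of such a read projection is shown below: a region rebuilds its local view by replaying the ordered event log from the primary site. The event shapes and offsets are invented for illustration; in practice the log would be consumed from Kafka.

```ruby
# Illustrative read-side projection: a region rebuilds local state by
# replaying the ordered event log from the primary (write) site.
class ListingProjection
  attr_reader :listings, :last_offset

  def initialize
    @listings    = {} # read-optimized view served locally
    @last_offset = -1
  end

  def apply(event, offset)
    return if offset <= @last_offset # already applied, safe to replay

    case event[:type]
    when "item.listed" then @listings[event[:id]] = event[:attrs]
    when "item.sold"   then @listings.delete(event[:id])
    end
    @last_offset = offset
  end

  # Recovery after a network partition: replay everything that was missed.
  def catch_up(event_log)
    event_log.each_with_index { |event, offset| apply(event, offset) }
  end
end

log = [
  { type: "item.listed", id: 1, attrs: { title: "Wool coat", price: 30 } },
  { type: "item.listed", id: 2, attrs: { title: "Sneakers",  price: 18 } },
  { type: "item.sold",   id: 1 }
]

projection = ListingProjection.new
projection.catch_up(log)
puts projection.listings # => {2=>{:title=>"Sneakers", :price=>18}}
```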
Changes in Search, Data, and Infrastructure
Search became one of the first subsystems to hit hard limits. Early on, the platform moved from Sphinx to Elasticsearch around 2014–2015. At the time, this was a solid step forward. Elasticsearch supported full-text queries, filters, and faceted navigation at scale. For years, it handled item discovery across tens and then hundreds of millions of listings.
By 2023, the numbers changed the equation. The catalog grew toward one billion active listings. Search traffic reached roughly 20,000 queries per second. At peak, p99 latency needed to stay under 150 milliseconds. The Elasticsearch setup had expanded to six clusters with about twenty large nodes each. Every node ran on machines with 128 CPU cores and 512 GB of RAM. Managing shards, hot nodes, and rebalancing became a constant operational burden.
At the same time, search requirements evolved. The team wanted vector search, deeper ranking, and better support for recommendation use cases. Elasticsearch could do parts of this, but not efficiently at that scale. The decision was made to replace it.
By late 2023, item search traffic fully migrated to Vespa. The impact was measurable. Infrastructure shrank to a single cluster of about sixty content nodes. Query latency improved by roughly 2.5×. Indexing latency dropped from about 300 seconds to roughly 5 seconds at p99. Updates now flow through Apache Flink into the search index almost in real time.
This change also unlocked better relevance. Vespa allowed ranking over hundreds of thousands of candidates per query and supported vector-based similarity alongside keyword search. The same system now powers search, feeds, and recommendation use cases. Search stopped being just retrieval and became a core ranking engine.
Database Scaling and Core Data
Search was not the only pressure point. The MySQL layer had been stretched through years of vertical sharding. By 2021–2022, the core databases were spread across more than forty physical servers. Each autumn peak pushed the system close to failure.

The team adopted Vitess to move from vertical to horizontal scaling. Vitess added sharding, routing, and connection pooling on top of MySQL, while keeping the relational model intact. This reduced operational risk and allowed resharding without downtime. Vitess was even added to CI environments, so tests ran against a sharded setup instead of a single database.
From Batch to Real Time
Data pipelines also evolved. Earlier batch jobs gave way to streaming. Kafka and Flink now move events through the system in near real time.

Search indexes, projections, and analytics update continuously instead of waiting for scheduled jobs. This approach allows faster feedback loops and fresher user experiences.
Infrastructure Maturity
The infrastructure followed the same path. Bare metal did not disappear, but containerization and orchestration became standard. Distributed systems demanded better observability, stricter security, and stronger automation. Monitoring, tracing, and intrusion detection became part of daily operations, not side projects.

Engineering Processes and Culture
Vinted’s architecture works because the engineering culture supports it. As the team grew from a handful of engineers to more than 240 by 2021 and over 50 teams by 2025, the company invested as much in process as in technology.
The Current Tech Stack (2026)
“Looking at this stack as a CTO, I see restraint more than ambition, and that is a compliment. What matters most is what was avoided. No big-bang rewrites, no single “standard” language, no platform dogma. This is how systems survive growth without collapsing under their own architecture.”
Wojciech Andruszkow, CTO at Ulan Software
Ruby remains the default choice. Go remains the tool for workloads where Ruby would struggle.
This balance reflects how experienced teams approach selecting a tech stack for a marketplace platform, based on constraints rather than trends.
Key Takeaways from Vinted’s Journey
Vinted’s 18-year path, from a small website to a €5 billion platform, offers lessons that apply to almost any C2C marketplace. To sum up:
- Start simple and stay pragmatic. A well-run monolith carried Vinted through its first seven years.
- Understand before refactoring. Domain-Driven Design helped Vinted see the system clearly before breaking it apart. This saved years of rework.
- Own what users trust most. Logistics, payments, escrow, and reputation systems are not add-ons. They are core infrastructure, as Vinted Go shows.
- Choose stable technology on purpose. Ruby, MySQL, and Kafka are not exciting, but they scale when used well and understood deeply.
- Design for global scale with intent. Eventual consistency is not a flaw. It is a trade-off that enables predictable performance across regions.
The core lesson is simple. Technology matters, but sequencing matters more.
At Ulan Software, these lessons come from years of building and scaling marketplace platforms that face similar technical and organizational challenges.
This article closes Part I of our Vinted case study, focused on technology and architecture. In the next parts we examine Vinted’s business model and strategic decisions.