
Everything I Knew About Spring Security Was Built for a Monolith

Markian Mumba in Tech blog
10 min read · Mar 24, 2026
Microservice · authentication · auth-server · jwts · oauth2 · openID

La Mousmé, 1888, Vincent van Gogh

Over-Engineered on Purpose — Part 6: Authentication in Microservices, and Why My First Approach Was Wrong

This is Part 6 of a series where I'm building a microservice platform from scratch. Part 1 covers the architecture, Part 5 covers gRPC load balancing. Full codebase on GitHub.

I've implemented Spring Security in monoliths probably a dozen times. Add the dependency, configure the filter chain, set up a UserDetailsService, add a JWT filter, done. I could do it on autopilot.

So when it came time to secure my microservices, I thought I knew what I was doing.

I didn't.

Not because the individual concepts were wrong — JWT validation, filter chains, SecurityContextHolder — all of that still applies. But the assumptions underneath them? Those are monolith assumptions. And they break in ways I didn't see coming.

How Security Works in a Monolith (The World I Knew)

Before getting into what broke, let me lay out what I was working from. If you've done Spring Security before, this will feel familiar. If you haven't, this is the mental model I was carrying into the microservice world.

In a monolith, the security flow is something like this:

A user hits the login endpoint. The request passes through the security filter chain, but /auth/login is on the permit list so it goes through. In the login service, you create a UsernamePasswordAuthenticationToken with the email and password — this token is unauthenticated, it's just saying "here are credentials, please verify them."

You pass it to the AuthenticationManager, which delegates to a DaoAuthenticationProvider. The provider calls your UserDetailsService to load the user by email, then uses BCryptPasswordEncoder to compare the raw password against the stored hash. If it matches, the provider creates a new UsernamePasswordAuthenticationToken — this time authenticated, with the user as the principal and their roles as granted authorities.

You take that authenticated user, generate a JWT with their claims (user ID, email, role, expiration), and send it back.

From that point, every request carries the JWT in the Authorization: Bearer header. The JwtAuthenticationFilter — which extends OncePerRequestFilter — intercepts each request, extracts the token, validates the signature, checks expiration, loads the user from the database, creates an authentication object, and sets it in the SecurityContextHolder.

The SecurityContextHolder is thread-local storage. Each request thread gets its own isolated security context. After the filter runs, Spring Security knows who you are. @PreAuthorize("hasRole('ADMIN')") works. SecurityContextHolder.getContext().getAuthentication() gives you the current user. Everything is contained in one process, one thread, one application.

This model works beautifully when everything lives in the same JVM.

Assumption 1: The Security Context Propagates

It doesn't.

In a monolith, when your controller calls a service, which calls a repository, the SecurityContextHolder is available at every layer. It's thread-local — same thread, same context, same authenticated user everywhere in the call chain.

In my microservice setup, the BFF validates the JWT and sets the security context. Great. But when the BFF makes a gRPC call to the Booking Service, that's a network call to a different JVM running on a different port. The SecurityContextHolder doesn't cross that boundary. The Booking Service has no idea who made the original request.

The security context is request-scoped and thread-local. It does not propagate across network boundaries. This seems obvious in hindsight, but when you've spent years in a world where SecurityContextHolder.getContext() always just works, you don't think about it.
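You can see the boundary with nothing but the standard library. Here's a minimal sketch where a plain ThreadLocal stands in for SecurityContextHolder's default thread-local strategy (the class and method names are made up for illustration): a value set on one thread is invisible on another thread in the same JVM, and a network hop to a different JVM is strictly worse than that.

```java
// A minimal stand-in for SecurityContextHolder's default thread-local strategy.
public class ContextDemo {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void setUser(String user) { CURRENT_USER.set(user); }
    static String currentUser()      { return CURRENT_USER.get(); }

    public static void main(String[] args) throws InterruptedException {
        setUser("alice");                                   // "authenticated" on this thread
        String[] seenElsewhere = new String[1];
        Thread other = new Thread(() -> seenElsewhere[0] = currentUser());
        other.start();
        other.join();
        System.out.println("same thread:  " + currentUser());     // alice
        System.out.println("other thread: " + seenElsewhere[0]);  // null
    }
}
```

If even a second thread in the same process sees null, a gRPC call to another service never had a chance.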

So how does the Booking Service know who's making the request? You have to explicitly pass that information. Either you forward the JWT in the gRPC metadata, or you extract the user ID and include it in the request message itself.

Assumption 2: One Shared Secret Is Fine

My first approach was the monolith approach: generate JWTs in the User Service with a shared HMAC secret key, validate them in the BFF with the same secret. Same key in both services. Same application.yml property.

# User Service
jwt:
  secret-key: ${JWT_SECRET_KEY}
# BFF
jwt:
  secret-key: ${JWT_SECRET_KEY}  # Must match User Service

This works. But it opens questions fast. If the Booking Service also needs to validate tokens (because we're forwarding JWTs through gRPC calls), it needs the secret too. Now the Catalog Service needs it. The Notification Service might need it. Suddenly every service in the system has the signing secret. Any service that gets compromised can forge tokens for any user. In a monolith, there's one secret in one place. In microservices, "shared secret" means the secret is everywhere. And a secret that's everywhere isn't really a secret.
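To make the forgery risk concrete, here's a stdlib-only sketch of HMAC-signed tokens. It's deliberately simplified (real JWTs have a base64url-encoded header and claims; the class name, payload, and secret here are all made up), but it shows the core problem with symmetric keys: signing and verifying are the same capability, so any service that can check a token can also mint one.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacTokenDemo {
    // Hypothetical shared secret -- the JWT_SECRET_KEY that every service ends up holding.
    static final byte[] SECRET = "change-me-32-bytes-minimum-secret".getBytes(StandardCharsets.UTF_8);

    static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    static boolean verify(String token) throws Exception {
        int dot = token.lastIndexOf('.');
        return dot > 0 && token.equals(sign(token.substring(0, dot)));
    }

    public static void main(String[] args) throws Exception {
        // Any compromised service holding SECRET can mint a valid token for any user:
        String forged = sign("{\"sub\":\"any-user\",\"role\":\"ADMIN\"}");
        System.out.println(verify(forged)); // true
    }
}
```

With HMAC there is no way to hand a service "verify only" access; the fix, covered below, is asymmetric signing.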

Assumption 3: Security Lives in One Place

In a monolith, security is a cross-cutting concern handled by the filter chain. One configuration class, one filter, done.

In my microservice architecture, security decisions are distributed:

The BFF needs to validate user JWTs from the frontend, hold sessions, handle OAuth2 flows, and attach tokens to outgoing gRPC calls.

Each microservice needs to validate incoming JWTs on gRPC calls — but using server interceptors, not servlet filters. Each one needs to extract user information from the token and make it available to the service logic.

Service-to-service calls need their own authentication. When the Booking Service calls the Catalog Service to check machine availability, that's not a user request. It's an internal service call. Where does the token come from? The user's JWT might work for forwarding, but what about background jobs? What about a cron service that checks maintenance schedules and sends notification emails? There's no user in that flow. There's no JWT to forward.

This is where I started realizing my initial JWT approach wasn't going to scale. I needed different token types for different scenarios — user tokens for user-initiated flows, service tokens for service-to-service calls. And I needed a centralized place to issue and validate all of them.

A Side Note: Spring Security and gRPC

Quick thing that bit me while setting up security. My services run two servers — Tomcat for HTTP (health checks, actuator, Eureka registration) and Netty for gRPC. I figured I'd add Spring Security to protect the Tomcat side. Couldn't hurt, right?

The moment Spring Security hit the classpath, my gRPC calls started failing with UNAUTHENTICATED errors. Turns out Spring gRPC has its own security auto-configuration. From the docs: if Spring Security is on the classpath, it automatically adds Basic auth to your gRPC server — not just the servlet side. I didn't expect it. But there it was, blocking every gRPC call because no Basic auth credentials were being sent.

I ended up not including Spring Security on the individual services at all. The lesson: with Spring gRPC, security on the classpath means security on both servers. If you're planning to handle gRPC authentication your own way (which I am, with JWT interceptors), either keep Spring Security off the classpath entirely or explicitly configure GrpcSecurity to permitAll() and handle it yourself.

What I Tried First (The Passthrough Approach)

Before redesigning everything, I tried the simplest thing that could work.

The BFF validates the user's JWT. When making a gRPC call, it passes the JWT along as metadata in the gRPC request. The receiving service extracts the JWT from the metadata, validates it against the shared secret, and extracts the user information.

For gRPC, passing data alongside the request uses interceptors — conceptually similar to middleware. A client interceptor on the BFF side adds the JWT to outgoing calls. A server interceptor on the service side reads it from incoming calls.

User Request (JWT in header)
  → BFF validates JWT
    → Client Interceptor adds JWT to gRPC metadata
      → Catalog Service Server Interceptor reads JWT
        → Validates against shared secret
          → Extracts user info, processes request

This worked for user-initiated flows. But the problems I identified earlier remained:

The secret key is in every service. If the Booking Service needs to call the Catalog Service, it forwards the user's JWT — but that token was issued for the user, not for the Booking Service. And for background processes with no user context, there's no token to forward at all.

Then I ran into the video that changed my thinking: a talk on securing microservices with JWTs that laid out three approaches:

Passthrough — forward the user's JWT to downstream services. Each service validates it with the shared secret. Simple, but it carries the shared-secret problem with it.

Pass subset — parse the JWT at the BFF, extract the claims, pass them as metadata with an API key. Services trust the API key and use the claims. Avoids passing raw tokens but introduces an API key management problem.

Reissue — the BFF validates the user's JWT and creates a new internal token for downstream calls. Better isolation, but now the BFF is issuing tokens, which is a lot of responsibility for a gateway.

All great methods. But what I needed was a dedicated service whose only job is authentication and token issuance.

An authorization server.

The Shift in Thinking

The realization that pushed me forward was this: in a monolith, the application IS the authority on identity. It stores passwords, it signs tokens, it validates tokens. It's all one thing.

In microservices, you need to separate the authority from the consumers. One service signs tokens. Every other service verifies them. And the mechanism for verification shouldn't require sharing secrets — it should use public key cryptography, where the signing key is private and the verification key is public.

This is what OAuth2 and OpenID Connect provide. An authorization server holds the private key, signs tokens, and publishes the public key at a well-known URL (the JWKS endpoint). Any service can fetch the public key and verify tokens without ever having access to the signing key.

No shared secrets. No token forgery risk from a compromised service. Services validate tokens independently.
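The asymmetric version can be sketched with plain java.security (a real authorization server would publish the public key as a JWKS document rather than hand it over in-process, and would sign a full JWT rather than raw claims; the names here are illustrative):

```java
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class AsymmetricTokenDemo {
    // Signing needs the private key -- only the authorization server holds it.
    static byte[] sign(PrivateKey key, byte[] claims) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(claims);
        return s.sign();
    }

    // Verification needs only the public key, which the auth server publishes at its JWKS endpoint.
    static boolean verify(PublicKey key, byte[] claims, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(claims);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        java.security.KeyPairGenerator gen = java.security.KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        java.security.KeyPair pair = gen.generateKeyPair();

        byte[] claims = "{\"sub\":\"42\",\"role\":\"USER\"}".getBytes();
        byte[] sig = sign(pair.getPrivate(), claims);

        System.out.println(verify(pair.getPublic(), claims, sig));                       // true
        System.out.println(verify(pair.getPublic(), "{\"sub\":\"1\"}".getBytes(), sig)); // false
    }
}
```

A service holding only the public key can verify every token but forge none, which is exactly the property the shared HMAC secret couldn't give us.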

This is where Part 7 picks up — setting up Spring Authorization Server, implementing the OAuth2 Authorization Code flow with the BFF pattern, and building the gRPC interceptor infrastructure that makes tokens flow through the system automatically.

What I Learned

The individual security concepts from monolith development — JWTs, filter chains, authentication providers, security contexts — all still matter. They're the vocabulary. But the architecture around them changes completely.

In a monolith, security is a configuration problem. In microservices, security is a distributed systems problem. Where do tokens come from? How do they propagate? How do services authenticate to each other without a user in the loop? How do you secure two different transport protocols running in the same application?

These aren't questions you encounter building monoliths. They're questions the architecture forces you to answer. And the answer — for me at least — pointed to one thing: stop trying to do security the monolith way. Centralize the authority. Distribute the verification. Trust the cryptography.

Resources That Helped

  • Securing Microservices with JWTs — This video laid out the passthrough, pass-subset, and reissue patterns for JWT propagation in microservices. Helped me understand the tradeoffs before committing to an approach.
