Sharing Custom Scalars Across Multiple Subgraphs

When scaling GraphQL Federation, engineering teams frequently encounter schema drift caused by duplicating type definitions across independent services. While Subgraph Implementation & Entity Resolution establishes the baseline for distributed architecture, custom scalar handling requires explicit coordination to prevent gateway composition failures and runtime serialization mismatches. This guide details the exact workflow for sharing custom scalars across multiple subgraphs without introducing validation bottlenecks or inconsistent coercion behavior.

Root Cause Analysis: Why Scalar Duplication Breaks Federation

The Apollo Router and gateway enforce strict type equivalence during supergraph composition, but that check operates on SDL: scalar declarations, descriptions, and applied directives such as @specifiedBy. The serialize, parseValue, and parseLiteral implementations live in each service's runtime and are invisible to the router. Engineers often assume scalar names alone guarantee compatibility, but the underlying coercion logic must be kept deterministic across all services by convention and tooling, because nothing at composition time can verify it.

Declaration-level conflicts typically surface as errors during rover subgraph publish or rover supergraph compose, along these lines:

Error: [Subgraph-A] and [Subgraph-B] define conflicting scalar definitions for 'DateTime'.

Implementation-level drift, by contrast, never fails the build. The router performs a structural comparison of the SDL only; mismatched null-handling, strictness, or AST traversal in resolver code slips through composition and resurfaces as inconsistent serialization at runtime. Federation v2 does not merge scalar implementations, and it cannot validate them for parity.
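The declaration-level parity check can be pictured with a small sketch. This is illustrative only: the router parses full SDL with a real GraphQL parser, and the subgraph names below are hypothetical.

```typescript
// Illustrative sketch of declaration-level parity checking. A regex stands in
// for a real SDL parser; subgraph names and SDL snippets are hypothetical.

interface SubgraphSDL {
  name: string;
  sdl: string;
}

// Capture each "scalar Name ..." declaration line, directives included.
function scalarDeclarations(sdl: string): Map<string, string> {
  const decls = new Map<string, string>();
  for (const match of sdl.match(/scalar\s+\w+[^\n]*/g) ?? []) {
    const name = match.split(/\s+/)[1];
    decls.set(name, match.trim());
  }
  return decls;
}

// Flag scalars whose declaration text differs between any two subgraphs.
function findScalarConflicts(subgraphs: SubgraphSDL[]): string[] {
  const first = new Map<string, { subgraph: string; decl: string }>();
  const conflicts: string[] = [];
  for (const sg of subgraphs) {
    scalarDeclarations(sg.sdl).forEach((decl, scalar) => {
      const prev = first.get(scalar);
      if (!prev) {
        first.set(scalar, { subgraph: sg.name, decl });
      } else if (prev.decl !== decl) {
        conflicts.push(
          `[${prev.subgraph}] and [${sg.name}] define conflicting scalar definitions for '${scalar}'.`
        );
      }
    });
  }
  return conflicts;
}
```

Note that two subgraphs declaring a bare `scalar DateTime` pass this kind of check even when their runtime coercion differs, which is exactly why the build cannot catch implementation drift.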

Architecture: Centralized Definition vs. Shared Packages

The most reliable approach involves publishing a versioned, language-agnostic scalar specification alongside a shared resolver package. Rather than copying SDL snippets into each service, teams should treat custom scalars as infrastructure dependencies. This aligns with the broader strategies covered in Custom Scalars in Federated GraphQL Schemas, ensuring that every subgraph imports the exact same type definition and coercion behavior. Centralization eliminates drift and simplifies platform-wide updates.

Migration Path:

  1. Extract inline scalar definitions from individual subgraph repositories.
  2. Publish a shared npm/internal package containing the SDL string and resolver map.
  3. Update all subgraphs to import the package and remove local scalar definitions.
  4. Pin the package version across services to guarantee synchronized deployments.
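Step 4 can be enforced mechanically in CI. A minimal sketch, assuming a package named @company/shared-scalars and simplified manifest shapes (both hypothetical):

```typescript
// Hypothetical CI helper: verify every service pins the same version of the
// shared scalar package. Service and package names are illustrative.

interface Manifest {
  service: string;
  dependencies: Record<string, string>;
}

const SHARED_PKG = '@company/shared-scalars';

// Returns a human-readable entry for each service whose pinned version
// deviates from the first service's version (or is missing entirely).
function findVersionDrift(manifests: Manifest[]): string[] {
  const versions = manifests.map(m => ({
    service: m.service,
    version: m.dependencies[SHARED_PKG] ?? '<missing>',
  }));
  const expected = versions[0]?.version;
  return versions
    .filter(v => v.version !== expected)
    .map(v => `${v.service}: ${v.version} (expected ${expected})`);
}
```

Wiring this into a pre-merge pipeline turns version drift into a failed check instead of a production serialization bug.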

Step-by-Step Implementation: Defining, Exporting, and Composing

1. Shared Scalar Definition & Resolver Map

Define the scalar using the GraphQL spec-compliant API in a centralized package. Export both the SDL and the resolver map.

// packages/shared-scalars/src/DateTime.ts
import { GraphQLScalarType, Kind, ValueNode } from 'graphql';

// Shared parsing helper so parseValue and parseLiteral cannot drift apart.
function parseDate(value: string): Date {
  const date = new Date(value);
  if (isNaN(date.getTime())) throw new TypeError('Invalid DateTime format');
  return date;
}

export const DateTimeScalar = new GraphQLScalarType({
  name: 'DateTime',
  description: 'ISO 8601 formatted date-time string',
  serialize(value: unknown): string {
    if (value instanceof Date) return value.toISOString();
    if (typeof value === 'string') return parseDate(value).toISOString();
    throw new TypeError('DateTime.serialize expected Date or ISO string');
  },
  parseValue(value: unknown): Date {
    if (typeof value !== 'string') throw new TypeError('DateTime.parseValue expected string');
    return parseDate(value);
  },
  parseLiteral(ast: ValueNode): Date {
    if (ast.kind !== Kind.STRING) throw new TypeError('DateTime.parseLiteral expected StringValueNode');
    return parseDate(ast.value);
  }
});

export const dateTimeSDL = 'scalar DateTime';

2. Subgraph Schema Composition

Attach the imported scalar to the schema builder. Do not redefine coercion logic in service-specific codebases.

// services/orders/src/schema.ts
import { buildSubgraphSchema } from '@apollo/subgraph';
import { DateTimeScalar, dateTimeSDL } from '@company/shared-scalars';
import { gql } from 'graphql-tag';

const typeDefs = gql`
  ${dateTimeSDL}

  type Order @key(fields: "id") {
    id: ID!
    createdAt: DateTime!
    shippedAt: DateTime
  }
`;

export const schema = buildSubgraphSchema({
  typeDefs,
  resolvers: {
    DateTime: DateTimeScalar,
    Order: { /* entity resolvers */ }
  }
});

3. Gateway Composition Validation Script

Run local composition checks before deployment to catch drift early.

#!/bin/bash
# scripts/validate-supergraph.sh
set -e

echo "🔍 Validating subgraph schemas..."
rover subgraph check my-graph@current \
 --schema ./services/orders/schema.graphql \
 --name orders

echo "🔨 Composing supergraph..."
rover supergraph compose \
 --config ./supergraph-config.yaml \
 --output ./supergraph.graphql

echo "✅ Composition successful. Scalar definitions are consistent."

4. End-to-End Serialization Test

Validate parseLiteral, parseValue, and serialize symmetry across mocked gateway requests.

// packages/shared-scalars/__tests__/DateTime.test.ts
import { DateTimeScalar } from '../src/DateTime';
import { Kind } from 'graphql';

describe('DateTimeScalar Symmetry', () => {
  const iso = '2024-01-15T10:30:00.000Z';

  test('parseLiteral -> serialize roundtrip', () => {
    const parsed = DateTimeScalar.parseLiteral({ kind: Kind.STRING, value: iso });
    expect(DateTimeScalar.serialize(parsed)).toBe(iso);
  });

  test('parseValue -> serialize roundtrip', () => {
    const parsed = DateTimeScalar.parseValue(iso);
    expect(DateTimeScalar.serialize(parsed)).toBe(iso);
  });

  test('rejects malformed input', () => {
    expect(() => DateTimeScalar.parseValue('not-a-date')).toThrow(TypeError);
    // Note: IntValueNode carries its value as a string in graphql-js.
    expect(() => DateTimeScalar.parseLiteral({ kind: Kind.INT, value: '123' })).toThrow(TypeError);
  });
});

Runtime Resolution & Serialization Consistency

Even with identical SDL, runtime mismatches occur when subgraphs use different underlying libraries for parsing or formatting. Standardize on a single serialization library across the platform. Validate that parseLiteral handles AST nodes correctly, parseValue processes JSON inputs, and serialize outputs match the expected wire format.
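One way to enforce a single wire format is to funnel every outgoing value through one canonical formatter before it leaves a resolver. A minimal sketch; the function name and the epoch-milliseconds assumption for numeric inputs are illustrative, not part of any library:

```typescript
// Hypothetical platform helper: normalize every supported input shape to the
// canonical ISO 8601 wire format (UTC, millisecond precision), so all
// subgraphs emit byte-identical strings for the same instant.

function toCanonicalDateTime(value: unknown): string {
  let date: Date;
  if (value instanceof Date) {
    date = value;
  } else if (typeof value === 'number') {
    // Assumption: numeric inputs are epoch milliseconds; adjust if a legacy
    // service uses epoch seconds.
    date = new Date(value);
  } else if (typeof value === 'string') {
    date = new Date(value);
  } else {
    throw new TypeError('Unsupported DateTime input');
  }
  if (isNaN(date.getTime())) throw new TypeError('Invalid DateTime input');
  // toISOString always yields UTC with milliseconds (e.g.
  // 2024-01-15T10:30:00.000Z) regardless of the input's timezone offset,
  // which makes it a convenient canonical form.
  return date.toISOString();
}
```

Calling this from serialize in every subgraph (via the shared package) removes the class of bugs where one service emits offsets like +01:00 and another emits Z-suffixed UTC.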

Query/Response Validation Example:

# Client Request
query GetOrder($id: ID!) {
 order(id: $id) {
 id
 createdAt
 }
}

Expected Gateway Response (JSON Wire Format):

{
 "data": {
 "order": {
 "id": "ord_9f8e7d",
 "createdAt": "2024-01-15T10:30:00.000Z"
 }
 }
}

If a subgraph returns 2024-01-15T10:30:00Z (missing milliseconds) or a Unix timestamp, the router will not coerce it to the canonical format automatically. A type-level mismatch, such as a number where clients expect a string, can surface as a serialization error at the gateway layer; a format-level mismatch like the missing milliseconds passes through silently and breaks client-side parsing and cache consistency instead. Implement integration tests that mock gateway requests to verify end-to-end coercion. Pay special attention to edge cases like timezone normalization in DateTime scalars or precision handling in BigDecimal implementations.
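For decimal scalars the analogous hazard is precision. A hypothetical BigDecimal normalizer, sketched under the assumption that values travel as strings end to end (so no subgraph silently round-trips them through binary floating point):

```typescript
// Hypothetical normalizer for a string-backed BigDecimal scalar: produce one
// canonical textual form for equal values so subgraphs agree byte-for-byte.

function normalizeBigDecimal(value: string): string {
  if (!/^-?\d+(\.\d+)?$/.test(value)) {
    throw new TypeError('BigDecimal expects a plain decimal string');
  }
  // Strip redundant trailing zeros (and a bare trailing point) from the
  // fractional part: "3.1400" -> "3.14", "3.000" -> "3".
  if (value.includes('.')) {
    value = value.replace(/0+$/, '').replace(/\.$/, '');
  }
  // Strip redundant leading zeros while preserving one integer digit and
  // the sign: "007.5" -> "7.5", "-0.5" stays "-0.5".
  return value.replace(/^(-?)0+(?=\d)/, '$1');
}
```

Whether trailing zeros are "redundant" is itself a platform decision (some domains treat 3.10 as carrying significance); the point is that whichever rule is chosen must live in the shared package, not in individual services.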

Troubleshooting & Validation Workflows

When composition fails, follow this diagnostic path:

  1. Inspect Supergraph Diff: Run rover subgraph check --schema ./schema.graphql --name <subgraph> and review the diff output. Look for scalar DateTime definition conflicts.
  2. Isolate Definition Mismatches: Compare exported SDL across services using diff or a schema registry UI. Ensure scalar DateTime appears exactly once per subgraph with identical annotations.
  3. Trace Serialization Pipeline: If runtime errors appear, log the raw input at the resolver boundary. Common failure points include:
  • Strict vs. relaxed validation modes in JSON scalars
  • Missing null-handling in parseValue
  • Inconsistent error formatting across language runtimes (Node.js vs. Python vs. Go)
  4. Implement CI/CD Gates: Block merges when scalar resolver signatures diverge. Use rover supergraph compose in pre-merge pipelines and fail the build on non-zero exit codes.

Common Mistakes

  • Duplicating scalar resolver logic per service instead of using a shared package
  • Ignoring parseLiteral vs parseValue symmetry, causing inconsistent AST vs JSON input handling
  • Overusing @shareable on scalars when Federation v2 handles scalar equivalence implicitly
  • Skipping local composition checks before pushing to the graph registry
  • Allowing different date/timezone libraries to format identical DateTime scalars across subgraphs

FAQ

Do I need the @shareable directive for custom scalars in Federation v2?

No. Scalars are implicitly shareable in Federation v2. The @shareable directive applies to object types and object fields, not scalars. Adding it to a scalar definition is redundant and can cause unnecessary schema warnings during composition.

How do I handle different serialization formats across services?

Standardize on a single parsing library and wire format at the platform level. If legacy services require different formats, implement an adapter layer that normalizes inputs before they reach the shared scalar resolver, rather than modifying the scalar itself.
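Such an adapter can be a plain function applied at the service boundary, before values reach the shared scalar. A sketch under two illustrative assumptions: the legacy services emit either epoch seconds or millisecond-less ISO strings.

```typescript
// Hypothetical adapter layer: normalize legacy inputs to the platform's
// canonical ISO 8601 format before the shared DateTime resolver sees them,
// leaving the shared scalar itself untouched.

type LegacyInput = string | number;

function adaptLegacyDateTime(value: LegacyInput): string {
  // Assumption: legacy numeric values are epoch *seconds*.
  if (typeof value === 'number') {
    return new Date(value * 1000).toISOString();
  }
  // Assumption: legacy strings may omit milliseconds or use non-UTC offsets;
  // round-trip through Date to reach the canonical form.
  const date = new Date(value);
  if (isNaN(date.getTime())) throw new TypeError('Unrecognized legacy DateTime');
  return date.toISOString();
}
```

Keeping the adapter in the legacy service's codebase, rather than branching inside the shared scalar, preserves a single source of truth for coercion while isolating the quirk where it originates.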

Can I override a shared scalar in a single subgraph?

Overriding a scalar definition in one subgraph will trigger a composition conflict. If a service requires different behavior, define a new scalar with a distinct name (e.g., LegacyDateTime) and map it explicitly in that subgraph’s resolver layer.
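The explicit mapping behind such a renamed scalar can stay small. A sketch of a hypothetical LegacyDateTime parser, assuming an MM/DD/YYYY legacy format purely for illustration:

```typescript
// Hypothetical parser backing a distinct LegacyDateTime scalar in one
// subgraph. The MM/DD/YYYY format is an illustrative assumption.

function parseLegacyDateTime(value: string): Date {
  const m = /^(\d{2})\/(\d{2})\/(\d{4})$/.exec(value);
  if (!m) throw new TypeError('LegacyDateTime expects MM/DD/YYYY');
  const [, month, day, year] = m;
  if (Number(month) < 1 || Number(month) > 12 || Number(day) < 1 || Number(day) > 31) {
    throw new TypeError('LegacyDateTime out of range');
  }
  // Interpret the legacy date as midnight UTC to keep output deterministic.
  return new Date(Date.UTC(Number(year), Number(month) - 1, Number(day)));
}
```

Because the scalar has its own name, composition sees no conflict with the shared DateTime, and the divergent behavior stays visibly scoped to the one subgraph that needs it.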

What happens if the gateway encounters conflicting scalar definitions?

The router will reject the supergraph build during composition, returning a conflicting scalar definition error. You must resolve the mismatch by aligning the SDL and resolver implementations across all publishing subgraphs before the build can succeed.