
SIPGR - Engineering Case Study

Building a federated SaaS platform for public policy analytics — with auth that works across government subdomains and charts that don't lie about the data.


Role

Design Engineer (Front-end)

Timeline

Aug 2025 – Jan 2026

Tools

Nuxt.js 3, Vue.js 3, TypeScript, Tailwind CSS, Chart.js

The Challenge

Building for government data isn't the same as building for end users. SIPGR had to do both at once, for audiences that want different things.

Public policy consultancy work is invisible in most tech conversations. But the infrastructure underneath it — the platforms that analysts use to understand where public money goes, how social indicators shift, which policies are working — those systems carry real weight.

SIPGR was one of those systems. A SaaS platform for data analysis and visualization, built for government clients across multiple agencies, under Macroplan Consultoria e Analytics. Six months. A cross-functional remote squad. Confidential datasets I couldn't share with anyone outside the project.

The technical complexity wasn't in any single feature. It was in the combination:

  • Multiple client-facing subdomains, each needing authentication — but all sharing the same user identity
  • Analytical dashboards with complex multi-series charts over large government datasets
  • Data tables with tens of thousands of rows that needed to feel responsive, not just functional
  • A component architecture that had to scale across platforms without a dedicated design team to maintain it
  • A fully remote squad operating in agile sprints — where communication overhead is its own kind of technical debt

The hardest problems weren't in the PRD. They were the architectural ones nobody writes tickets for.


Architecture Decisions

1. Cross-Domain SSO: The Token Sharing Problem

The problem

SIPGR wasn't a single platform — it was a federation of subdomain-specific applications sharing a user pool. A user authenticated on one subdomain needed to be recognized on another without re-authenticating. But tokens stored in localStorage don't cross origins, and redirect-based OAuth flows would have made every navigation feel like a login wall.

Options considered

  • localStorage + postMessage — Pro: simple per app. Con: cross-origin communication is fragile; no SSR support
  • Full OAuth + Redis token store — Pro: robust and scalable. Con: heavy infrastructure; overkill for a same-TLD context
  • Wildcard cookie on shared domain — Pro: works transparently across all subdomains. Con: requires the same TLD; limited cross-origin flexibility
  • Central auth app + iframe bridge — Pro: an established pattern. Con: complex to set up and debug

Decision made

Wildcard cookies on the shared root domain, with the Secure flag set and a token refresh composable that runs on every app initialization. Nuxt's server-side capabilities handled cookie reading on SSR, ensuring authenticated state was available before the first paint — not patched in on the client after hydration.
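Nuxt's useCookie abstracts the SSR read, but the mechanics are worth seeing. A minimal sketch in plain TypeScript, where the helper name and the default cookie name are assumptions for illustration, not the project's code: recover the token from an incoming request's Cookie header before rendering.

```typescript
// Hypothetical helper: read the shared auth token off an incoming
// request's Cookie header during SSR, before the first paint.
export function parseAuthToken(
  cookieHeader: string | undefined,
  name = 'auth_token'
): string | null {
  if (!cookieHeader) return null
  for (const pair of cookieHeader.split(';')) {
    // cookie values may themselves contain '=', so re-join the remainder
    const [key, ...rest] = pair.trim().split('=')
    if (key === name) return decodeURIComponent(rest.join('='))
  }
  return null
}
```

With the token available server-side, the authenticated layout can render on the first response instead of flashing a logged-out state.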

Tradeoff accepted

This approach only works within the same top-level domain. If the architecture ever required third-party subdomain integration, we'd need a different strategy. For the current scope, that constraint was acceptable — and the simplicity it bought us was worth more than the theoretical flexibility we gave up.


2. Dashboard Performance: Chart.js over the Alternatives

The problem

Government analytics dashboards are data-dense by nature. I was looking at time-series charts with multiple overlapping datasets, stacked bar charts for budget breakdowns, and scatter plots for policy correlation analysis. The risk was a rendering pipeline that couldn't keep up with data updates — especially on lower-end government-issued hardware.

Options considered

  • D3.js — Maximum control, but SVG rendering at scale has real performance ceilings and a steep learning curve for contributors
  • ECharts — Excellent performance, but heavyweight bundle and a more complex API surface
  • Chart.js — Canvas-based rendering, tree-shakeable, solid Vue integration via vue-chartjs
  • Custom canvas — Full control, zero dependencies, extremely high implementation cost

Decision made

Chart.js via vue-chartjs, with canvas rendering as the performance baseline. Charts were wrapped in a composable that handled initialization, reactive data updates, and resize observation — so the component layer stayed thin and the logic stayed testable.

Tradeoff accepted

Chart.js isn't as visually flexible as D3 for highly custom visualizations. Some stakeholder requests for non-standard chart types required creative workarounds using Chart.js plugins. But for the 90% case — clean, responsive, performant data charts across diverse datasets — it was the right call.
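Those plugin workarounds are less exotic than they sound. A hypothetical sketch, in which the plugin, the structural types, and the styling values are all invented for illustration (real code would import the Chart and Plugin types from chart.js): a dashed horizontal target line drawn after the datasets render.

```typescript
// Minimal structural types standing in for Chart.js's own.
interface Ctx2D {
  save(): void
  restore(): void
  beginPath(): void
  moveTo(x: number, y: number): void
  lineTo(x: number, y: number): void
  stroke(): void
  setLineDash(segments: number[]): void
  strokeStyle: string
}
interface ChartLike {
  ctx: Ctx2D
  chartArea: { left: number; right: number }
  scales: { y: { getPixelForValue(v: number): number } }
}

// Hypothetical plugin: draw a dashed target line at yValue after the
// datasets have rendered, spanning the full chart area.
export function targetLinePlugin(yValue: number) {
  return {
    id: 'targetLine',
    afterDatasetsDraw(chart: ChartLike) {
      const { ctx, chartArea, scales } = chart
      const y = scales.y.getPixelForValue(yValue)
      ctx.save()
      ctx.strokeStyle = 'rgba(220, 38, 38, 0.8)'
      ctx.setLineDash([6, 4])
      ctx.beginPath()
      ctx.moveTo(chartArea.left, y)
      ctx.lineTo(chartArea.right, y)
      ctx.stroke()
      ctx.restore()
    }
  }
}
```

Registered per chart via the plugins array in the Chart.js configuration, extensions like this covered most of the non-standard requests without leaving Chart.js.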


3. Component Architecture: Design System Without a Design System Team

The problem

Multiple platforms. One developer owning the front-end architecture. No dedicated design system team. Building components ad-hoc across platforms would have meant drift — subtle inconsistencies in spacing, color, interaction patterns — that compound over months into a product that feels incoherent even if nothing is technically wrong.

Options considered

  • Monorepo with a shared npm package — Cleanest separation, but requires versioning discipline and a publish pipeline
  • Copy-paste "design system" — Fast to start, catastrophic to maintain
  • Shared Tailwind config + composable-driven components within the Nuxt app — Tight coupling to the stack, but zero infrastructure overhead

Decision made

A composable-first component library within the Nuxt ecosystem. Tailwind config as the single source of design tokens. Composition API composables for all stateful behavior. Slots and props for component flexibility. No external packaging — just clear internal structure and naming conventions the team could follow without documentation overhead.

Tradeoff accepted

This doesn't scale to truly cross-framework contexts. It's Nuxt-specific. But so was the project — and optimizing for a hypothetical future architecture at the cost of current development velocity would have been the wrong tradeoff.


Key Technical Implementations

Federated Auth: The useAuth Composable

The core of the SSO strategy was a single composable that every app consumed. It reads the token from cookies (SSR-compatible via useCookie), validates the session, and handles refresh logic without exposing implementation details to page components.

```typescript
// composables/useAuth.ts
export function useAuth() {
  const token = useCookie('auth_token', {
    domain: '.macroplan.com.br',
    secure: true,
    httpOnly: false, // needs to be readable client-side for API headers
    maxAge: 60 * 60 * 8 // 8-hour session
  })

  const user = useState<User | null>('user', () => null)
  const isAuthenticated = computed(() => !!token.value && !!user.value)

  async function initialize() {
    if (!token.value) return
    try {
      user.value = await $fetch('/api/auth/me', {
        headers: { Authorization: `Bearer ${token.value}` }
      })
    } catch {
      token.value = null // invalidate on fetch failure
    }
  }

  async function refresh() {
    const newToken = await $fetch<string>('/api/auth/refresh', {
      method: 'POST',
      headers: { Authorization: `Bearer ${token.value}` }
    })
    token.value = newToken
  }

  return { token, user, isAuthenticated, initialize, refresh }
}
```

The key design decision: useState for the user object means the state is shared across all components in the Nuxt app without a Pinia store. The cookie handles persistence across subdomains. The composable owns the contract.


Chart.js: The useChart Composable

The problem with Chart.js in Vue is lifecycle management. You need to create the chart after the canvas mounts, destroy it before the component unmounts, and react to data changes without creating duplicate instances. Doing this inline in every component leads to copy-paste bugs. Wrapping it in a composable solved all three at once.

```typescript
// composables/useChart.ts
import { Chart, type ChartConfiguration, type ChartType } from 'chart.js'

export function useChart<T extends ChartType>(
  canvasRef: Ref<HTMLCanvasElement | null>,
  type: T,
  options: ComputedRef<ChartConfiguration<T>>
) {
  // shallowRef: Chart instances must not be wrapped in Vue's deep
  // reactivity proxies, and consumers always see the live instance
  const chart = shallowRef<Chart<T> | null>(null)

  function init() {
    if (!canvasRef.value) return
    chart.value = new Chart(canvasRef.value, options.value)
  }

  function destroy() {
    chart.value?.destroy()
    chart.value = null
  }

  watch(options, (newOptions) => {
    if (!chart.value) return
    chart.value.data = newOptions.data
    chart.value.options = newOptions.options ?? {}
    chart.value.update('active')
  }, { deep: true })

  onMounted(init)
  onBeforeUnmount(destroy)

  return { chart }
}
```

Usage at the component level becomes almost declarative — the component knows nothing about Chart.js internals:

```vue
<script setup lang="ts">
// data shape depends on what buildBudgetChartConfig expects
const props = defineProps<{ data: unknown }>()

const canvas = ref<HTMLCanvasElement | null>(null)
const chartConfig = computed(() => buildBudgetChartConfig(props.data))
useChart(canvas, 'bar', chartConfig)
</script>

<template>
  <canvas ref="canvas" />
</template>
```

Server-Side Pagination: The usePagination Composable

Government data tables can have 50,000+ rows. Client-side pagination is not an option. The usePagination composable wraps the full cycle — page state, API calls, loading states, and error handling — with a generic type parameter so it works for any resource without duplication.

```typescript
// composables/usePagination.ts
export function usePagination<T>(
  fetchFn: (page: number, pageSize: number) => Promise<PaginatedResponse<T>>,
  initialPageSize = 20
) {
  const page = ref(1)
  const pageSize = ref(initialPageSize)
  const total = ref(0)
  const items = ref<T[]>([]) as Ref<T[]>
  const loading = ref(false)
  const error = ref<string | null>(null)

  const totalPages = computed(() => Math.ceil(total.value / pageSize.value))
  const hasNext = computed(() => page.value < totalPages.value)
  const hasPrev = computed(() => page.value > 1)

  async function load() {
    loading.value = true
    error.value = null
    try {
      const result = await fetchFn(page.value, pageSize.value)
      items.value = result.data
      total.value = result.total
    } catch {
      error.value = 'Failed to load data'
    } finally {
      loading.value = false
    }
  }

  // Navigation only mutates state; the watcher below is the single
  // trigger for load(), so no navigation causes a double fetch
  function next() { if (hasNext.value) page.value++ }
  function prev() { if (hasPrev.value) page.value-- }
  function goTo(p: number) { page.value = p }

  // immediate: true covers the initial load on mount
  watch([page, pageSize], load, { immediate: true })

  return { items, page, pageSize, total, totalPages, hasNext, hasPrev, loading, error, next, prev, goTo }
}
```

Adding a new paginated data view became a matter of providing a fetchFn and consuming the returned state. The first table took hours to get right. Every table after that was minutes.
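The fetchFn contract is the entire integration surface. A stand-in fetcher over an in-memory array shows the shape; the helper and the response interface here are illustrative, assuming the real fetcher delegates to the API with the same signature.

```typescript
interface PaginatedResponse<T> {
  data: T[]
  total: number
}

// Hypothetical in-memory stand-in for a paginated endpoint: the same
// (page, pageSize) contract usePagination expects, with 1-indexed pages.
export function makeArrayFetcher<T>(rows: T[]) {
  return async (page: number, pageSize: number): Promise<PaginatedResponse<T>> => {
    const start = (page - 1) * pageSize
    return { data: rows.slice(start, start + pageSize), total: rows.length }
  }
}

// The real fetcher has the same shape, delegating to the API, e.g.:
// const fetchIndicators = (page: number, pageSize: number) =>
//   $fetch<PaginatedResponse<Indicator>>('/api/indicators', { query: { page, pageSize } })
```

Stand-ins like this also make the composable unit-testable without a network.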


Design System Angle

My positioning as a Design Engineer means the gap between design intent and implementation is my problem to solve — not to hand off. On SIPGR, that translated into a few concrete principles that shaped every component I shipped.

Design tokens live in config, not in components. Tailwind's config file became the single source of truth for the color system, spacing scale, and typography. If a brand color changed, it changed everywhere — not across thirty component files touched one by one.
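In practice that means Tailwind's theme extends from a single token object. A sketch, with token names and values invented for illustration rather than the project's actual palette:

```typescript
// tailwind.config.ts — illustrative sketch of centralized design tokens.
const tokens = {
  colors: {
    brand: { DEFAULT: '#1d4ed8', muted: '#93c5fd' },
    surface: '#f8fafc'
  },
  spacing: {
    gutter: '1.5rem'
  }
}

export default {
  theme: {
    extend: tokens
  }
}
```

Components then reference utilities like text-brand or p-gutter, never raw hex values; changing a value in this one file propagates everywhere.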

Components expose behavior through slots, not through prop explosion. A BaseTable component shouldn't accept 40 props to handle every display variant. It should expose column slots so each implementation can customize rendering without touching the component's internal logic. The difference between a "configurable" component and a "composable" one is exactly this.
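A hypothetical consumer makes the contrast concrete (the component, prop, and slot names are illustrative): the table stays generic, and each view customizes one column's cell rendering through a named slot.

```vue
<template>
  <!-- BaseTable iterates rows and exposes one slot per column key -->
  <BaseTable :rows="programs" :columns="['name', 'budget']">
    <template #cell-budget="{ row }">
      <!-- only this view decides how budget values are formatted -->
      <span class="tabular-nums">{{ formatCurrency(row.budget) }}</span>
    </template>
  </BaseTable>
</template>
```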

Composables are the design system layer for stateful UI. Loading states, pagination, form validation, chart lifecycle — these are shared UI patterns with shared implementation needs. Extracting them into composables means a developer joining the project picks up established patterns immediately, rather than reinventing them in each feature.

The result wasn't a published design system. It was a codebase with a clear internal grammar — one that made the right patterns easy to reach and the wrong ones harder to stumble into.


AI-Accelerated Development

I used Claude Code throughout the project as a development partner, not just a code autocomplete. The distinction matters.

Composable scaffolding. When I identified a new shared pattern — say, a composable for form validation with API error mapping — I'd describe the interface I needed in natural language and get a working scaffold back in seconds. I'd review it, adjust the TypeScript types, and integrate it. The cognitive overhead of bootstrapping went away.

Refactoring sessions. After the first month, I had three components with near-identical Chart.js setup logic. I described the pattern to Claude Code, asked it to extract a composable, and reviewed the result against all three original implementations. What would have been a 90-minute manual refactor became a 20-minute review session.

Code review before PRs. Before opening a pull request, I'd share the diff and ask Claude Code to check for TypeScript type mismatches, edge cases in async logic, and accessibility issues in new components. It caught a silent error in the pagination watch trigger that would have caused double API calls on mount — the kind of bug that only surfaces in production under specific timing conditions.

Component API documentation. For every composable I shipped, I'd ask Claude Code to generate JSDoc-style documentation from the implementation. Accurate, consistent, and actually maintained — without the documentation overhead that usually kills internal tooling.

The net effect: I moved faster on implementation work, spent more cognitive energy on architecture decisions, and shipped with fewer regressions. The AI didn't replace engineering judgment. It removed the friction that makes good engineering feel expensive.


Results

The engagement ran for six months across two major delivery phases. The data is confidential, but the delivery outcomes are clear:

  • Cross-domain SSO shipped and held — zero reported auth issues across the full engagement: no session corruption, no token leakage between subdomains
  • Analytical dashboards rendered complex multi-series configurations within acceptable performance budgets on the client's hardware profile
  • Server-side pagination kept data tables responsive at dataset sizes that would have frozen a client-side implementation
  • Each successive feature took measurably less time — adding a new paginated data table became a fraction of the effort the first one required, once the composable infrastructure was in place
  • Delivered across agile cycles with a cross-functional remote squad — design, back-end, and product — with async communication as the primary coordination mechanism, and no sprint spills attributed to front-end blockers

What I Learned

1. Auth complexity hides below the surface

Wildcard cookies seem like a one-line solution. They're not. Secure flags, readable vs HttpOnly cookies, SSR vs client-side access, token invalidation on the server, expiry alignment across apps — each decision has a downstream consequence. The architecture for auth needs to be established early and revisited carefully. Retrofitting it into an existing system is expensive in ways that are hard to estimate upfront.

2. Composables are infrastructure, not utilities

Early in the project, I wrote composables to reduce repetition. By the end, I understood them differently: they're the load-bearing walls of Vue front-end architecture. useAuth, usePagination, useChart weren't conveniences — they were the project's internal API contract between features and shared behavior. Naming them well, designing their interfaces carefully, and keeping them focused turned out to matter as much as writing them correctly.

3. AI tools change the economics of quality

The parts of engineering that used to be expensive — scaffolding, refactoring, documentation, pre-PR review — have a different cost structure when you have an AI development partner. That doesn't make the work easier. It changes where the hard work is. The architectural decisions, the tradeoff analysis, the product judgment — those remain entirely human. But the mechanical cost of doing good engineering drops significantly. The real opportunity is using that reclaimed time on the things that actually require judgment.