Seed project
In this lesson we will learn about the seed project and give an overview of the technology stack.
Architecture
All our recent and new projects are written in TypeScript, as we firmly believe in its type-safety features and the impact they have on our development flow. We also use Zod, which enables us to perform runtime validation using schemas and to generate TypeScript types based on those same schemas.
Back end
The Codifly back end is a server application which can:
- Connect to a PostgreSQL database using Prisma ORM
- Handle API requests using tRPC (type-safe remote procedure calls)
- Perform business logic
- Model business objects (e.g. Users, Vehicles, Accounts, ... whatever your application is about)
The architecture of a Codifly back end has two main layers:
- Service layer: This layer accesses the database using Prisma and performs business logic. Every service function runs inside a database transaction automatically.
- tRPC layer: This layer handles incoming API requests. It does authentication (who is connecting?), authorization (does the user have permission?) and validation (is the request correctly formulated using Zod schemas?). It then calls the service layer and returns the result.
The big advantage of tRPC is that your API is fully type-safe from front end to back end. The TypeScript types are shared automatically, so if you change a field name in the API, the front end will show a compile error immediately. No more guessing what shape the data has!
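To make that concrete, here is a tiny dependency-free sketch of the principle. This is not tRPC's actual implementation — `serverRouter` and `getGreeting` are made-up names — it only illustrates how deriving the client's types from the server's router type turns API drift into compile errors:

```typescript
// "Server" side: a router is just a typed map of procedures.
const serverRouter = {
  getGreeting: (input: { name: string }) => ({ greeting: `Hello, ${input.name}!` }),
};

// "Client" side: the router type is shared (as a type only), never duplicated.
type AppRouter = typeof serverRouter;

// A toy client: it simply forwards to the router, but TypeScript now knows
// the exact input and output shapes of every procedure.
const client: AppRouter = serverRouter;

const result = client.getGreeting({ name: 'Codifly' });
// result.greeting is typed as string; typing result.hello would not compile.
console.log(result.greeting); // logs "Hello, Codifly!"
```

If the server renamed `greeting` to `message`, the line reading `result.greeting` would immediately fail to compile on the client — that is the guarantee tRPC gives you across the whole API surface.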
Each layer has its own files. In this guide we will move from the bottom (Prisma / data) to the top (tRPC) to learn about all of these files. The project contains an example business object called a 'widget'. You can copy from the code associated with this widget; throughout the guide we will use it as an example.
Front end
The Codifly front end is a React client-side-rendered single-page application (or CSR SPA for short) and/or a React Native mobile app using Expo, and has the following features:
- out-of-the-box authentication flow that integrates with our API
- data fetching and global state management using the tRPC client (type-safe, no code generation needed)
- e2e tests using Cypress
- (web) development server and production build using Vite
- (web) ready-to-use components built on the Mantine component library
Setting up for the lesson
This is an interactive lesson in which we will run a standard server and client. For now we'll only use the web front end.
If you haven't done so already, create a folder called codifly somewhere (your home directory, for example). Next, go to the seed project on GitLab.
This type of repository is called a monorepo. It contains multiple
distinct projects such as web, app (front end) and api (back end).
The seed is our starter project. It contains a variety of setup, ensures that our different projects are (largely) aligned in structure and functionality, and removes the tedious steps that would otherwise need to be repeated with every new project. In this lesson we will go through the process of starting a new project using the seed.
We will clone the seed project and immediately call it lesson4.
mkdir codifly
cd codifly
git clone git@gitlab.codifly.be:codifly-projects/team-codifly/nest/seed/seed.git lesson4
Check the README.md file in the docs folder of the seed. It contains instructions on how to set up the project and run it.
Here is the gist of it:
Check which Node version is required. To change your version, run:
sudo n <version>
Open the api folder in VS Code and run the following commands in the terminal window to install the dependencies and run the server:
npm install
npm run dev
Do the same for the web folder. A browser window should open (or
navigate to localhost:3000) and the seed application should show. Log
in using no-reply+admin@codifly.be with password 12345678. Play around with
the application a little.
Running the project
Our api dev script uses Docker Compose to run our API server and a database server. It also does some additional setup, like exposing and mapping the correct ports, running the database migrations, and seeding some data (if not already done).
Our web dev script uses Vite to start a development server that serves our bundle and provides features such as Hot Module Replacement (HMR), which can update your application on any changes without a full page reload.
So when the dev commands are finished we have a running back-end server which can communicate with a database and a front-end development server that can communicate with our back end all set up automatically. Take a minute to revel in this greatness. Or be puzzled by this information overload. No worries, eventually you will know the structure inside and out. Feel free to be curious and explore the setup and don't feel shy to ask questions about any step in the entire flow, it's the best way to learn!
Observing the widgets CRUD
To learn about the tech stack in depth, it's best to observe the code behind the widget-related endpoints. We will leave the user-related logic aside for now. If you haven't already, run the server and play around with the Widgets CRUD. To find it, go to the CMS section of the application. The user CRUD window opens when you open the CMS; use the hamburger menu in the upper left to access the Widgets CRUD.
Now get back to the api code in VS Code and follow along.
Data layer: Prisma
Business objects are defined using Prisma, our ORM (Object-Relational Mapping) library. Prisma translates your TypeScript code into SQL queries under the hood and makes the database much easier to work with.
The Prisma schema lives in ./src/data/codegen/schema.prisma. Open that
file and look for the Widget model:
model Widget {
  id            String    @id @db.Uuid @default(uuid())
  stringField   String    @db.VarChar(255)
  regexField    String    @db.VarChar(255)
  numberField   Int
  dateField     DateTime  @db.Date
  arrayField    String[]  @db.VarChar(255)
  nullableField String?   @db.VarChar(255)
  createdAt     DateTime  @default(now()) @db.Timestamptz(6)
  updatedAt     DateTime  @updatedAt @default(now()) @db.Timestamptz(6)
  deletedAt     DateTime? @db.Timestamptz(6)

  @@index([deletedAt])
  @@map("widgets")
}
A few things to notice:
- `@@map("widgets")` tells Prisma that this model maps to a database table called `widgets`.
- Each field has a type (`String`, `Int`, `DateTime`, etc.) and optional modifiers like `@db.VarChar(255)` for the column type in the database.
- `@id` marks the primary key; `@default(uuid())` auto-generates a UUID.
- `String?` means the field is nullable (the `?`). `String[]` is an array of strings.
- Every model has `createdAt`, `updatedAt` and `deletedAt` fields. We use soft deletes, meaning we never actually remove rows from the database. Instead, we set `deletedAt` to the current timestamp. This way data can always be recovered if needed.
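The soft-delete behaviour can be sketched without a database at all. The snippet below is illustrative only (`Row`, `softDelete` and `findActive` are invented names); it mirrors what the real code does with Prisma's `updateMany` and a `deletedAt: null` filter:

```typescript
// "Deleting" sets deletedAt instead of removing the row;
// reads filter on deletedAt === null.

interface Row { id: string; deletedAt: Date | null }

function softDelete(rows: Row[], ids: string[]): void {
  const now = new Date();
  for (const row of rows) {
    if (ids.includes(row.id)) row.deletedAt = now;
  }
}

function findActive(rows: Row[]): Row[] {
  // Mirrors Prisma's `where: { deletedAt: null }` filter.
  return rows.filter((row) => row.deletedAt === null);
}

const rows: Row[] = [
  { id: 'a', deletedAt: null },
  { id: 'b', deletedAt: null },
];
softDelete(rows, ['a']);
console.log(findActive(rows).map((r) => r.id)); // [ 'b' ] — 'a' is hidden, not gone
```

Row 'a' still exists in storage with its `deletedAt` timestamp set, which is exactly why the data can always be recovered.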
The Prisma client is connected to the database through the code in
./src/data/index.ts. This file provides a getPrisma() function that
returns the database client. You don't need to worry about this setup,
just know it exists.
Business logic: the service layer
The ./src/services folder contains TypeScript files with functions
that perform operations on models. We organize all the functions
associated with a single model into a single file. Open
./src/services/widgets.ts.
At the top of the file you'll see the Zod schemas. These define the shape of our data and are used for both TypeScript types and runtime validation:
// The full widget type (matches the database)
export const WidgetSchema = z.strictObject({
  id: z.string().uuid(),
  stringField: z.string(),
  regexField: z.string().regex(/...regex pattern.../),
  numberField: z.number().int().positive(),
  dateField: z.coerce.date(),
  arrayField: z.array(z.string()),
  nullableField: z.string().nullable(),
  createdAt: z.date(),
  updatedAt: z.date(),
  deletedAt: z.date().nullable(),
});

// The public widget type (what we return to the client, without internal fields)
export const WidgetPublicSchema = WidgetSchema.omit({
  createdAt: true,
  updatedAt: true,
  deletedAt: true,
});
Notice how internal fields like createdAt, updatedAt and deletedAt
are stripped before sending data to the client. The toWidgetPublic()
function handles this conversion.
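As an illustration of the idea (this is not the seed's actual code, and the trimmed-down `Widget` type below is an assumption), a `toWidgetPublic`-style conversion boils down to destructuring away the internal fields:

```typescript
// A trimmed-down widget type for illustration.
interface Widget {
  id: string;
  stringField: string;
  createdAt: Date;
  updatedAt: Date;
  deletedAt: Date | null;
}

// The public shape: everything except the internal bookkeeping fields.
type WidgetPublic = Omit<Widget, 'createdAt' | 'updatedAt' | 'deletedAt'>;

function toWidgetPublic(widget: Widget): WidgetPublic {
  // Destructure the internal fields away and keep the rest.
  const { createdAt, updatedAt, deletedAt, ...publicFields } = widget;
  return publicFields;
}

const widget: Widget = {
  id: '42',
  stringField: 'hello',
  createdAt: new Date(),
  updatedAt: new Date(),
  deletedAt: null,
};
console.log(toWidgetPublic(widget)); // { id: '42', stringField: 'hello' }
```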
Now look at the service functions. Each one follows the same pattern
using the serviceFunction() wrapper:
export async function getWidgets(input, options?) {
  return await serviceFunction(options, async ({ transaction }) => {
    // Use transaction.widget to query the database.
    // (where, orderBy, skip and take are derived from the validated input.)
    const [widgets, count, totalCount] = await Promise.all([
      transaction.widget.findMany({ where, orderBy, skip, take }),
      transaction.widget.count({ where }),
      transaction.widget.count({ where: { deletedAt: null } }),
    ]);
    return { data: widgets.map(toWidgetPublic), count, totalCount };
  });
}
The key thing to understand about serviceFunction() is that it
automatically wraps your code in a database transaction. The
transaction object it gives you is basically the Prisma client, but
scoped to that transaction. You use it like transaction.widget.findMany(...),
transaction.widget.create(...), etc.
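A dependency-free sketch of the wrapper's behaviour may help. Everything below is invented for illustration (the real `serviceFunction` is async and hands you a transaction-scoped Prisma client); the point is the commit-on-success / rollback-on-error shape:

```typescript
// A stub transaction that just records what happens to it.
interface FakeTransaction { events: string[] }

function serviceFunctionSketch<T>(
  body: (ctx: { transaction: FakeTransaction }) => T,
): { result?: T; events: string[] } {
  const transaction: FakeTransaction = { events: ['BEGIN'] };
  try {
    const result = body({ transaction });
    transaction.events.push('COMMIT'); // success: commit
    return { result, events: transaction.events };
  } catch {
    transaction.events.push('ROLLBACK'); // error: roll back
    return { events: transaction.events };
  }
}

const ok = serviceFunctionSketch(({ transaction }) => {
  // Your queries would run here, all inside the same transaction.
  transaction.events.push('SELECT * FROM widgets');
  return 3;
});
// ok.events: BEGIN, SELECT * FROM widgets, COMMIT

const failed = serviceFunctionSketch(() => {
  throw new Error('boom');
});
// failed.events: BEGIN, ROLLBACK
```

Because every service function runs through this wrapper, a thrown error anywhere in the body undoes all of that function's database writes.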
Here's what each CRUD operation looks like:
- getWidgets: Fetches a paginated list of widgets. Supports filtering by `searchString` and `ids`, sorting, and pagination with `offset` and `limit`. Always filters out soft-deleted records (`deletedAt: null`).
- getWidget: Fetches a single widget by `id`. Throws a `WIDGET_NOT_FOUND` error if it doesn't exist (or is deleted).
- createWidget: Creates a new widget using `transaction.widget.create()`.
- updateWidget: First checks that the widget exists, then updates it using `transaction.widget.update()`. Only the fields you pass in the input are updated (partial updates).
- deleteWidgets: Performs a soft delete. Uses `transaction.widget.updateMany()` to set `deletedAt` to the current date on all matching widgets.
Each function also defines its own input schema (what data the caller must provide) and error codes (what can go wrong). For example:
export const GetWidgetErrorCode = {
  WIDGET_NOT_FOUND: 'WIDGET_NOT_FOUND',
} as const;
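The error-code pattern can be sketched end to end like this. `ServiceError` and the in-memory `Map` standing in for the database are assumptions for illustration; the real seed may structure its errors differently, but the idea is the same: services throw typed errors with a code, and a higher layer decides how to present them.

```typescript
// The error codes a service function can produce.
const GetWidgetErrorCode = {
  WIDGET_NOT_FOUND: 'WIDGET_NOT_FOUND',
} as const;

type GetWidgetErrorCodeType = (typeof GetWidgetErrorCode)[keyof typeof GetWidgetErrorCode];

// A typed error carrying a machine-readable code.
class ServiceError extends Error {
  readonly code: GetWidgetErrorCodeType;
  constructor(code: GetWidgetErrorCodeType) {
    super(code);
    this.code = code;
  }
}

// An in-memory stand-in for the database.
const widgets = new Map<string, { id: string }>([['1', { id: '1' }]]);

function getWidget(id: string): { id: string } {
  const widget = widgets.get(id);
  if (!widget) throw new ServiceError(GetWidgetErrorCode.WIDGET_NOT_FOUND);
  return widget;
}
```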
The API layer: tRPC
Now that we have service functions available we can look at the tRPC
layer which exposes them as API endpoints. Open ./src/api/_widgets.ts.
A tRPC router is a collection of procedures. Each procedure is basically an API endpoint. There are two types:
- query: For reading data (like a GET request)
- mutation: For changing data (like a POST/PUT/DELETE request)
Before a procedure runs your code, it goes through a middleware chain that handles authentication and authorization. We have three levels of access:
// Anyone can call this (no login required)
export const publicProcedure = trpcProcedure;

// Must be logged in
export const privateProcedure = trpcProcedure
  .use(requireAuthenticationMiddleware);

// Must be logged in AND have the ADMIN role
export const adminProcedure = trpcProcedure
  .use(requireAuthenticationMiddleware)
  .use(makeRequireRolesMiddleware(['ADMIN']));
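Conceptually, the middleware chain is just a list of checks that run before your handler: each one inspects the context and either throws or lets the chain continue. A minimal dependency-free sketch (all names below are illustrative, not the seed's real middleware API):

```typescript
interface Ctx { user?: { roles: string[] } }
type Middleware = (ctx: Ctx) => void;

// Authentication: is anyone logged in at all?
const requireAuthentication: Middleware = (ctx) => {
  if (!ctx.user) throw new Error('UNAUTHORIZED');
};

// Authorization: does the user hold one of the required roles?
const makeRequireRoles = (roles: string[]): Middleware => (ctx) => {
  if (!roles.some((role) => ctx.user?.roles.includes(role))) {
    throw new Error('FORBIDDEN');
  }
};

// Run every middleware in order; only then does the handler execute.
function runChain(middlewares: Middleware[], ctx: Ctx): string {
  for (const middleware of middlewares) middleware(ctx);
  return 'handler ran';
}

const adminChain = [requireAuthentication, makeRequireRoles(['ADMIN'])];
console.log(runChain(adminChain, { user: { roles: ['ADMIN'] } })); // logs "handler ran"
```

An anonymous caller fails the first check and a logged-in non-admin fails the second, which is exactly the difference between `privateProcedure` and `adminProcedure`.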
In the widgets file, every procedure uses adminProcedure because only
admins can manage widgets. Here's how a procedure is defined:
const getWidgets = adminProcedure
  .input(GetWidgetsSchema)            // Validate the input using a Zod schema
  .query(async function ({ input }) { // This is a query (read operation)
    try {
      return await widgetService.getWidgets(input); // Call the service
    } catch (error) {
      throw mapErrorToTrpcError(error, {}); // Map service errors to HTTP errors
    }
  });
The pattern is always the same:
1. Choose a procedure type (adminProcedure, privateProcedure, etc.)
2. Define the .input() with a Zod schema (tRPC validates it
automatically)
3. Use .query() for reads or .mutation() for writes
4. Call the service function and map any errors
The mapErrorToTrpcError function translates service-level errors into
proper HTTP errors. For example:
// If the service throws WIDGET_NOT_FOUND, the client gets a 404
throw mapErrorToTrpcError(error, {
  WIDGET_NOT_FOUND: TrpcErrorCode.NOT_FOUND,
});
At the bottom of the file, all procedures are combined into a router:
export const widgetsRouter = trpcRouter({
  getWidgets,
  getWidget,
  createWidget,
  updateWidget,
  deleteWidgets,
});
This router is then registered in ./src/api/index.ts as part of the
app router:
export const trpcAppRouter = trpcRouter({
  userAuth: userAuthRouter,
  appVersions: appVersionsRouter,
  health: healthRouter,
  users: usersRouter,
  widgets: widgetsRouter, // <-- our widgets router
});
Testing your endpoints
All tRPC endpoints are available under the /api path. The URL format
is /api/<router>.<procedure>. For example, to call getWidgets you
would hit /api/widgets.getWidgets.
You can test endpoints using Postman or curl. First, log in to get a
token:
curl -X POST http://localhost:9300/api/userAuth.login \
  -H "Content-Type: application/json" \
  -d '{"json": {"email": "no-reply+admin@codifly.be", "password": "12345678"}}'
Then use the token to call other endpoints:
curl --get http://localhost:9300/api/widgets.getWidgets \
  --data-urlencode 'input={"json": {"limit": 10}}' \
  -H "Authorization: Bearer <your-token>"
Note that tRPC sends queries as GET requests, so the input travels URL-encoded in the query string; only mutations are sent as POST bodies.
CI pipeline
Before we start the exercise, let's quickly go over our continuous-integration (CI) pipeline. This pipeline runs when you open a merge request and validates whether your code complies with our style guide (linter configuration) and whether the tests still pass. We found this to be a better way to enforce these standards than using git hooks. Make sure your pipeline is always successful before merging!
You can run the same checks locally before pushing:
npm run validate # runs all checks (deps audit, typecheck, lint, spellcheck)
npm run test # runs the test suite
Creating a new endpoint
Say you want to create a new endpoint to manipulate vehicle objects. (You don’t HAVE to choose vehicles, in fact, please don’t. We’ve already seen enough vehicle endpoints in our lives. For the sake of this example, however, let’s assume it’s vehicles.) Your project manager (PM) has specified that the vehicle should have these fields:
PM defines Vehicle as having
- id
- brand
- model
- mileage
- chassis nr
- color
- owner (relationship to User)
The PM says you should develop an application in which vehicles can be shown (queried), deleted, created and updated. You have to expand the seed project to incorporate endpoints for it.
Now the natural course of action is to copy the files from the widget endpoint and adapt them.
Here are the steps to follow, with few details; you'll have to figure out the specifics yourself. Of course you can peek at the widget-related code for help and to see what structure to adhere to.
- Add the Prisma model in `src/data/codegen/schema.prisma`. Define your `Vehicle` model with all the fields, relationships (like `owner` pointing to `User`) and the standard `createdAt`, `updatedAt`, `deletedAt` fields. Look at the `Widget` model for the pattern.
- Run `npm run codegen:data`. This single command validates your Prisma schema, regenerates the Prisma client (so TypeScript knows about your new model), and auto-creates a SQL migration file by comparing the old and new schema. You'll find the generated migration in the `migrations` folder.
- Create the service file at `src/services/vehicles.ts`. This is where the bulk of the work goes:
  - Define your Zod schemas (`VehicleSchema`, `VehiclePublicSchema`, etc.)
  - Define input schemas for each operation (`GetVehiclesSchema`, `CreateVehicleSchema`, etc.)
  - Define error codes (e.g. `VEHICLE_NOT_FOUND`)
  - Write the service functions (`getVehicles`, `getVehicle`, `createVehicle`, `updateVehicle`, `deleteVehicles`) using `serviceFunction()` and the Prisma transaction
  - Copy from `src/services/widgets.ts` and adapt!
- Create the tRPC router at `src/api/_vehicles.ts`. Define a procedure for each operation, pick the right access level (`publicProcedure`, `privateProcedure`, `adminProcedure`) and map errors appropriately.
- Register the router in `src/api/index.ts` by importing your `vehiclesRouter` and adding it to the `trpcAppRouter`.
- Test your endpoints using Postman or curl to make sure everything works.
- The Good Developer also writes tests, otherwise Arvid will cry like the time he thought his chips got stolen (see the `tests` folder).
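As a starting point, the Vehicle model from the PM's spec could look something like this in the Prisma schema. The column types (the `VarChar` lengths, `Int` for mileage) and the `chassisNr`/`ownerId` names are assumptions for illustration; adapt them to your actual requirements. Prisma will also expect the back-relation (e.g. `vehicles Vehicle[]`) to be added on the `User` model.

```prisma
// Hypothetical sketch — field types and names are assumptions.
model Vehicle {
  id        String @id @db.Uuid @default(uuid())
  brand     String @db.VarChar(255)
  model     String @db.VarChar(255)
  mileage   Int
  chassisNr String @db.VarChar(255)
  color     String @db.VarChar(255)

  // Relationship to the owning User
  ownerId String @db.Uuid
  owner   User   @relation(fields: [ownerId], references: [id])

  createdAt DateTime  @default(now()) @db.Timestamptz(6)
  updatedAt DateTime  @updatedAt @default(now()) @db.Timestamptz(6)
  deletedAt DateTime? @db.Timestamptz(6)

  @@index([deletedAt])
  @@map("vehicles")
}
```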
Now go forth and create some fabulous software, Good Developer!
Archive: Old seed (Sequelize + REST + GraphQL)
The section below describes the previous seed architecture. It is kept for reference in case you encounter older projects that still use the Sequelize + REST + GraphQL stack. Or you're a very fast learner and want to try the same exercise but with the old stack. In that case, you just scroll back in the commits until just before the tRPC refactor was merged. The commit you need:
c51a762d11 (Jul 2025).
Back end (old)
The Codifly back end is a server application which can:
- Connect to a PostgreSQL database using Sequelize ORM
- Handle HTTP requests (REST)
- Handle GraphQL requests
- Perform business logic
- Model business objects (e.g. Users, Vehicles, Accounts, ... whatever your application is about)
The architecture of a Codifly back end is a layered cake (and Arvid LOVES cake). Here are the layers:
- Service layer: This layer accesses the database and performs business logic
- REST layer: This layer handles HTTP REST requests. It does authentication (who is connecting?), authorization (does the user have permission?) and validation (is the request correctly formulated?)
- GraphQL layer: This layer handles GraphQL requests. This layer is thin and just translates the GraphQL requests into REST requests.
The advantage of this architecture is that the Codifly server can handle both REST and GraphQL requests. As a standard, GraphQL is preferred whenever possible because of its advantages. In other cases, like a file upload endpoint, no GraphQL endpoint is constructed and only a REST endpoint is implemented.
Each layer requires files to house their logic but also other files that contain definitions and types. In this guide we will move from the bottom (service) to the top (GraphQL) to learn about all of these files. This project contains an example business object called a 'widget'. You can use the code associated with this widget to copy from and for the guide we will use it as an example.
Front end (old)
The Codifly front end is a React client-side-rendered single-page application (or CSR SPA for short) and/or a React Native mobile app using Expo, and has the following features:
- out-of-the-box authentication flow that integrates with our API
- data fetching and global state management using Apollo Client
- e2e tests using Cypress
- (web) development server and production build using Vite
- (web) ready-to-use components built on the Mantine component library
Model definition and ORM library configuration (old)
Business objects are defined in the ./src/data folder. Open
_widget.ts. You will find object definitions in the format that our
ORM library 'sequelize' requires. Sequelize translates to SQL commands
under the hood and makes the database much easier to use from our
application. We defined a 'Widget' type and a 'WidgetModel' type.
'WidgetModel' can be used when manipulating widgets using the database.
The 'Widget' type can be used if you are only interested in the fields
which correspond directly to database columns.
The file is built up like this:
- Type definitions
- Sequelize definitions for the 'widgets' model with definitions of all the columns/fields.
- Sequelize definitions for the relationships between models (helps Sequelize construct SQL JOINs under the hood)
If you want to make your own endpoint (for example about vehicles) you'd
need to create a _vehicle.ts file in the data folder and define your
model/table.
When you define a new model you also need to add a migration file to the
migration folder. Check out
./migrations/202301311035-createWidgetTable.ts file. In this file
sequelize is used to actually create the SQL code which contains the
CREATE TABLE command. You will find that there is some code duplication
between the model file in the data folder and the table definition in
the migration file. When making a new endpoint, design your model first
and copy the field definitions into your migration file after.
Also you might want to add some example data to the database by adding
some code to the ./migrations/seeds/index.ts file. Use what's already
there as an example. We have two example widgets.
The data layer is configured using the code in the ./src/data/index.ts
file. This just sets up the connection between sequelize and the
database and loads all the models. Open this file and look at how the
Widget models are added.
So adding a new model requires this:
- Add a model in the `./src/data` folder (e.g. `_vehicles.ts`)
- Add a migration in the `./migrations` folder; make sure to include a timestamp because the migrations are executed IN ORDER (e.g. `20230207-createVehiclesTable.ts`)
- Optionally add some seed data by changing the `./migrations/seeds/index.ts` file
- Update the `./src/data/index.ts` file to make sure your new model is known to Sequelize
Business logic and the service layer (old)
The ./src/services folder contains TypeScript files with functions that perform operations on models. We like to organize all the functions associated with a single model into a single file. Check out the ./src/services/widgets.ts file.
You will find functions there that perform basic operations such as
searching for widgets, adding widgets, deleting widgets and patching
widgets. In all of these cases sequelize is used to manipulate the
database. Read the comments in the file and make sure you understand the
basics of sequelize. Mind the presence of the transaction object which
is always passed as a parameter. This ensures that the queries are run
in a database
transaction.
When making your own endpoint associated with a model (e.g. vehicles)
you will want to create a vehicles.ts file containing all the services
related to vehicles. Make a copy from the widgets.ts file to get you
started if you want.
If you want to see an example of services containing a lot of business logic, have a look at the users.ts file in the services folder. But don't get too confused: the widget service purposely contains little business logic. Remember that the service layer is the right place to do complex calculations and logic.
The REST layer (old)
Now that we understand there are service functions available to us we
can look at the REST layer in which we can call these functions. To help
understand this layer open the file ./src/index.ts where the entire
server is constructed.
Our REST layer is built on Koa. Similar to Express, it uses middleware functions to transform a request into an eventual response. These functions can contain, but are not limited to, the following behavior:
- manipulate the current context (e.g. set a response body)
- perform some side effects (e.g. logging)
- call the next middleware (maybe even use its response)
- prevent the next middleware from being called (e.g. by throwing an error)
We have a number of custom middlewares that eventually culminate in the
handler middleware. This is a function provided to us by the Codifly
api
library
(not to be confused with the api project inside the monorepo) and makes things easy for us by allowing easy definition of:
- a schema for input validation
- a function that performs the logic for the endpoint (mostly calling the service layer and setting a response)
- a mapper object that maps thrown ServiceError instances to an appropriate HTTP status code
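The shape of such a handler definition can be sketched without Koa or the api library. All names below (`HandlerDef`, `runHandler`, the `BAD_INPUT` code) are invented for illustration; they only show how the three pieces — validation, logic, error mapping — fit together:

```typescript
interface HandlerDef<I, O> {
  validate: (input: unknown) => I;     // schema for input validation
  logic: (input: I) => O;              // the endpoint logic (calls the service layer)
  errorStatus: Record<string, number>; // ServiceError code -> HTTP status
}

function runHandler<I, O>(def: HandlerDef<I, O>, rawInput: unknown): { status: number; body?: O } {
  try {
    const input = def.validate(rawInput);
    return { status: 200, body: def.logic(input) };
  } catch (error) {
    // Map a thrown error code to a status, defaulting to 500.
    const code = (error as { code?: string }).code ?? '';
    return { status: def.errorStatus[code] ?? 500 };
  }
}

// A toy "get widget" handler definition.
const getWidgetHandler: HandlerDef<{ id: string }, { id: string }> = {
  validate: (input) => {
    if (typeof (input as { id?: unknown })?.id !== 'string') throw { code: 'BAD_INPUT' };
    return input as { id: string };
  },
  logic: (input) => ({ id: input.id }),
  errorStatus: { BAD_INPUT: 400, WIDGET_NOT_FOUND: 404 },
};

console.log(runHandler(getWidgetHandler, { id: '1' })); // { status: 200, body: { id: '1' } }
console.log(runHandler(getWidgetHandler, {}).status);   // 400
```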
You can also test your REST endpoints by using Postman (if you prefer
GUI) or curl (if you prefer CLI). To do a login, for example, you can
use Postman to send a POST request to
localhost:9300/api/account/login. (Run the server first!) In the body
use this JSON object:
{
  "email": "no-reply+admin@codifly.be",
  "password": "12345678"
}
You should receive a login auth token. You can now call other endpoints
by using an Authorization header with the value Bearer <your token>,
such as localhost:9300/api/widgets?searchString=beer.
The GraphQL layer (old)
A short intro
First you need to know about GraphQL. It is a query language or language specification to query servers. GraphQL allows you to precisely query the data you need without the need for multiple (slightly) different API endpoints, which gives it an advantage over REST. Queries that are executed are validated against the GraphQL schema. For more info on how GraphQL works and examples, check out the official learning docs as they are far better at explaining the fundamentals than us (they are giants after all).
The adoption of GraphQL also allows us to use the powerful
apollo-server and
apollo-client
libraries that provide us with extra features like aggregating multiple
calls into one request to save round trips, data caching and global
state management.
GraphQL's declarative query language also includes a type system with which you can define the data types of your GraphQL objects. However, this leaves us in a bit of a pickle. We already have a type system, namely TypeScript. Now we have to define our types both in GraphQL and in TypeScript land :(.
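To make the duplication concrete, here is a hedged sketch (field names borrowed from the widget example) of the same object described in GraphQL's schema definition language, which without extra tooling would have to be kept in sync by hand with a matching TypeScript type:

```graphql
# GraphQL schema definition — and somewhere else, a TypeScript
# `interface Widget { id: string; numberField: number; ... }`
# would have to repeat exactly the same shape.
type Widget {
  id: ID!
  stringField: String!
  numberField: Int!
  nullableField: String
}
```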
But what if there was a way to define your GraphQL types using TypeScript itself? There is! We use TypeGraphQL. This allows us to use decorators on TypeScript class properties that will be used by TypeGraphQL to generate the GraphQL types and resolvers.
That still leaves us with a second pickle on the front end. Now we have to copy the type definitions there and ensure they match our GraphQL schema, as well as create our own type definitions for our client documents.
Luckily, there's also a solution here called
graphql-codegen. It can
generate those types for us by checking our GraphQL schema and even
generates Apollo data fetching functions based on our client documents.
Remember to always run the update-graphql-code npm script after
changes to the schema or client documents.
Before we learn how data is fetched we must learn about resolvers, queries and mutations.
- Resolvers: A resolver is a function that's responsible for populating the data for a single field in your schema (Apollo docs)
- Query: Operation that retrieves data and has no side effects (no changes to the state, e.g. database)
- Mutation: Operations that mutate the state of the server and return some data. By definition it has side effects.
- Subscription: Operations that allow for real-time data updates from the server, often through websocket connections
We define the resolvers in their own file, just like the types. Check
out ./src/graphql/resolvers/widget.ts. Each file in the resolvers
directory contains at least a single Resolver class. This class has
multiple functions some of which are annotated as queries and others as
mutations. In most cases, the resolver resolves (hehe) to a call to our
REST layer using a custom fetcher function that is added on our
context by one of the aforementioned middleware functions.
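A dependency-free sketch of such a thin resolver may help (the `Context` shape and the fake fetcher below are invented for illustration; the real fetcher makes an actual HTTP call to our own REST layer):

```typescript
// The context carries a fetcher added by middleware.
interface Context {
  fetcher: (path: string) => { data: { id: string }[] };
}

// A thin resolver: no database access, it just forwards to the REST layer.
function getWidgetsResolver(ctx: Context) {
  return ctx.fetcher('/api/widgets');
}

// A fake fetcher standing in for the real HTTP call.
const ctx: Context = {
  fetcher: (path) => {
    if (path !== '/api/widgets') throw new Error('unknown path');
    return { data: [{ id: '1' }] };
  },
};

console.log(getWidgetsResolver(ctx)); // { data: [ { id: '1' } ] }
```

Keeping the resolver this thin is what lets the REST layer stay the single place where authentication, authorization and validation happen.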
The wilderness must be explored!
We could also use Postman or curl to test our GraphQL operations, but Apollo provides us with the far superior Apollo Studio Explorer (previously GraphQL Playground, should you be more familiar with that). This is an IDE that you can open in your browser by navigating to the /graphql endpoint of your back end server (this should be http://localhost:9300/graphql). Do this right now (or get another coffee first. I'm a guide, not a cop).
The UI is fairly intuitive, but I'll quickly go through it. On the left is the Documentation panel, where you can view all the operations and types defined in your GraphQL schema. In the middle you'll find a text area for entering your operation, as well as a bottom section for inputting your variables and adding any headers to your request. On the right you will see your response, which will be empty right now. On the far left you will see that you are currently on the Explorer page; you can click the button above it to show the entire schema reference. Some features of the studio require a (paid) account, but for simple testing and debugging purposes we will not need that.
Let's test out the explorer by first executing the login mutation, which will return an authorization token that we can use for further queries.
mutation login {
  login(
    input: { email: "no-reply+admin@codifly.be", password: "12345678" }
  ) {
    authToken
  }
}
You can type this query manually in the middle text area, or have it generated using the Documentation panel on the left and then input the variables in the corresponding section. Press the Play button to execute the mutation.
This mutation returns an authToken. From the right pane, copy the auth
token to your clipboard. Open a new tab in the IDE and set up the
getWidgets query:
query {
  getWidgets {
    count
    totalCount
    data {
      id
      stringField
      regexField
      numberField
      dateField
      arrayField
      nullableField
    }
  }
}
Open the Headers section at the bottom, add a new header and select
Authorization. In the value field enter Bearer, a space, then your
token.
Now any further requests are authorized and the REST layer will serve
them without error. Because we make a REST request in the resolver using
our special ctx.fetcher function the authorization header from the
GraphQL request is copied over to the request made by ctx.fetcher.
Feel free to use the explorer to explore even further.
So to make a GraphQL endpoint we usually need to:
- Define types in a file in the `./src/graphql/types` folder
- Define resolvers in a file in the `./src/graphql/resolvers` folder
The rest takes care of itself. Also, please remember that the way we set up the stack requires the GraphQL layer to be thin: it just makes another request internally to the REST layer. This is important because the REST layer is where authentication, authorization and validation are handled.
Creating a new endpoint (old)
Say you want to create a new endpoint to manipulate vehicle objects. Your project manager (PM) has specified that the vehicle should have these fields:
PM defines Vehicle as having
- id
- brand
- model
- mileage
- chassis nr
- color
- owner (relationship to User)
The PM says you should develop an application in which vehicles can be shown (queried), deleted, created and updated. You have to expand the seed project to incorporate endpoints for it. We will assume all of these endpoints use GraphQL.
Now the natural course of action is to copy the files from the widget endpoint and adapt them.
Here are the steps to follow, with few details; you'll have to figure out the specifics yourself. Of course you can peek at the widget-related code for help and to see what structure to adhere to.
- Create the business object (Sequelize model) in `src/data`
- Create the migration in the `migrations` folder
- Ensure Sequelize knows the new model (`src/data/index.ts`)
- Set up the service layer
- Set up the REST layer (don't forget the API validation)
- Add the endpoint routes (`src/rest/index.ts`)
- Test your REST endpoints using the tool of your choice
- Set up the GraphQL layer (type definition and resolver) in `src/graphql`
- Ensure the new type definition and resolvers are included in the GraphQL schema (`src/graphql/index.ts`)
- Test your new GraphQL operations using the Apollo Studio Explorer
- The Good Developer also writes tests, otherwise Arvid will cry like the time he thought his chips got stolen (see the `tests` folder).
You will find that this process contains a lot of duplicated definitions. The GraphQL types need to map exactly to the Sequelize types, and it's easy to make little typos or type mistakes. The seed is regularly maintained, so in the future this might be improved.
Now go forth and create some fabulous software, Good Developer!