0.3.0 repo

Version 0.3.0 updates the stack to use a Go backend in a monolithic structure, in contrast to version 0.2.0’s multi-server default installation utilizing Docker. 0.3.0 lessens the scope of Docker and microservices in favor of a more unified server based on Go and Protobufs. The frontend also uses a revised implementation of RTK-Query to streamline the development and usage of API endpoints: defined Protobufs are used to generate an OpenAPI spec document, which in turn is used to auto-generate RTK-Query hooks for React.

Some of the information here may be the same or similar to 0.2.0, but with revisions and updates where necessary.

🔗 Create an Awayto Project

🔗 Installation

Get the code from the repo

git clone https://github.com/keybittech/awayto-v3

Create an .env file

cp .env.template .env

At the very least, these properties should be set per your project (example values follow the list):

  • PROJECT_PREFIX - identifier used in resource creation; avoid special characters, as the identifier is used across the stack for various purposes with varying limitations
  • PROJECT_TITLE - literal text used for headers/titles
  • DOMAIN_NAME - deployed domain name
  • ADMIN_EMAIL - used for certbot registration
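
For example, a minimal configuration might look like this (all values below are placeholders for illustration):

PROJECT_PREFIX=myapp
PROJECT_TITLE="My App"
DOMAIN_NAME=example.com
ADMIN_EMAIL=admin@example.com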

🔗 Run the Project

Run Docker containers

Docker is used for Redis, Postgres, Coturn, and Gotenberg.

make docker_up

Build the project

make build

Develop the landing site

make landing_dev

Develop the app front end

make ts_dev

Develop the app back end

make go_dev

If the ts_dev server is also running, the go_dev server will proxy to it for local development, instead of serving the front end's static build folder.

Visit the site

Access the project at https://localhost:7443.

🔗 Software

In no particular order, the following table lists the third-party software used in Awayto, along with key features and a primary source for usage in the system:

| Technology | Description | Source |
|---|---|---|
| Make | Task running, building, deploying | Makefile |
| Shell | Deployment install/configure scripts | /deploy/scripts |
| Docker | Container service, docker compose, supports cloud deployments | /deploy/scripts/docker |
| Postgres | Primary database | /deploy/scripts/db |
| React | Front end TSX components and hooks built with a customized CRACO config | /ts |
| ReduxJS Toolkit | React state management and API integrating with Protobufs | /ts/src/hooks/store.ts |
| PNPM | Front end package management | /ts/package.json |
| Let’s Encrypt | External certificate authority | |
| Hetzner | Cloud deployment variant | /deploy/scripts/host |
| Keycloak | Authentication and authorization, SSO, SAML, RBAC | /java |
| Redis | Sessions & caching | /go/pkg/clients/redis.go |
| Hugo | Static site generator for landing, documentation, marketing | /landing |
| DayJS | Scheduling and time management utilities | /ts/src/hooks/time_unit.ts |
| Material-UI | React UI framework based on Material Design | /ts/src/modules |
| Coturn | TURN & STUN server for WebRTC based voice and video calling | /deploy/scripts/turn |
| WebSockets | Dedicated websocket server for messaging orchestration, interactive whiteboard | /go/pkg/clients/sock.go |

🔗 Creating a Feature

A feature can mean a lot of different things. Here we’ll give a quick rundown of what you might need to do when developing a new feature, from the database to the user interface.

Perhaps the most important aspect of any new implementation is the underlying data structure. We can either first create a Protobuf definition or a Postgres table, depending on our needs. Generally, both will be necessary, as the Protobuf will be used to define APIs and represent data relating to the Postgres table.

🔗 A Table with a View

All database scripts for the platform are stored in the /deploy/scripts/db folder. They are run the first time the db container starts with an empty volume. For example, when you run the first-time developer installation, a Docker volume is created and, since it is empty, the Postgres Docker installation automatically runs our database scripts for us. The scripts are named alphanumerically and run in that order.

New database scripts can be deployed in various ways. After running the installation, you will have a Postgres container running on your system. You can log into the running Postgres instance by using the make db CLI command for the dev DB, or make host_db for a deployed setting.

make db

# run SQL scripts
CREATE TABLE ...

Or we could do it the old-fashioned way.

docker exec -it $(docker ps -aqf "name=db") /bin/sh

# connected to the container
su - postgres
psql

# connected to Postgres
\c pgdblocal -- this is the default dev db name

# run SQL scripts
CREATE TABLE ...

To connect to a deployed db:

make host_db

# run SQL scripts
CREATE TABLE ...

As an example, we’ll set up a basic Todo feature in our app. We’ll make a new file in the scripts folder, /deploy/scripts/db/c1-custom_tables.sh. It’s a shell file because that is the mechanism used to enact the auto deployment when the Postgres container starts up for the first time. We’ll put the following in our file, and also run the SQL statement using one of the methods shown above. Auditing columns are included on all tables.

#!/bin/bash

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-'EOSQL'

  CREATE TABLE dbtable_schema.todos (
    id uuid PRIMARY KEY DEFAULT uuid_generate_v7(),
    task TEXT NOT NULL,
    done BOOLEAN NOT NULL DEFAULT false,

    -- Auditing Columns:
    created_on TIMESTAMP NOT NULL DEFAULT TIMEZONE('utc', NOW()),
    created_sub uuid REFERENCES dbtable_schema.users (sub),
    updated_on TIMESTAMP,
    updated_sub uuid REFERENCES dbtable_schema.users (sub),
    enabled BOOLEAN NOT NULL DEFAULT true
  );

EOSQL

You’ll notice we nest our tables in the schema dbtable_schema. There is also a dbview_schema, to which we will add a simple view wrapping our table. Views are the primary way data will be queried when we get to creating our API functionality. We’ll create another new file, /deploy/scripts/db/c1-custom_views.sh, with our view. Remember to also run the SQL script in the db container as described previously.

#!/bin/bash

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-'EOSQL'

  CREATE
  OR REPLACE VIEW dbview_schema.enabled_todos AS
  SELECT
    id,
    task,
    done,
    created_on as "createdOn",
    row_number() OVER () as row
  FROM
    dbtable_schema.todos
  WHERE
    enabled = true;

EOSQL

A few important callouts about view structures and project conventions (an example query follows the list):

  • Generally, views represent active or enabled records, so we name the view as such: enabled_todos. In situations where we need to hide a record from the set (soft-delete, data retention, etc.), we can make use of the enabled flag.
  • Views are the primary transition layer between the database schema and the application layer. Database naming follows snake_case, while the application uses camelCase. This conversion occurs in the view where applicable.
  • A row number can be added for ordering within the set where needed.
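
For instance, once the view exists, application-level queries can select the camel-cased fields directly (illustrative only):

SELECT id, task, done, "createdOn"
FROM dbview_schema.enabled_todos
ORDER BY "row";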

🔗 Protobufs

Protobufs are central to the platform’s API. By creating Protobufs, we can auto-generate Go structs for the back end and an OpenAPI-based set of RTK-Query hooks for React. New Protobuf definitions are added to the proto folder in the main directory. Upon building, auto-generated elements are placed into the /ts/hooks folder (in the case of the RTK-Query API) and /go/pkg/types (in the case of Go structs).

Review existing Protobufs in the proto folder.

For our example, we’ll create a proto that looks something like this:

message ITodo {
  string id = 1;
  string task = 2;
  bool done = 3;
  string createdOn = 4;
}

Note: Any time you add a new file or update the core package, it’s generally a good idea to fully restart any running dev servers when developing the API or UI.

🔗 Defining API Endpoints

Protobufs are used to define APIs, using the service and rpc constructs, along with custom- and Google-based protos for HTTP-related features. For each rpc defined in a service proto, a corresponding handler must be created in the /go/pkg/handlers folder. See existing handlers for examples.

Existing protos use a standard method of defining service RPCs and then the input/output messages for each individual RPC. Care must be taken to design an API that doesn’t become too large and cumbersome. Input and output messages should be simple, containing a few properties each, with specific purposes in mind.

External Properties

Use of Google-based proto annotations can be researched online, as well as studied in existing protos, and include:

  • google/protobuf/struct.proto
  • google/api/annotations.proto
  • google/api/field_behavior.proto

Custom Properties

Custom properties are further defined in util.proto (a hypothetical sketch of their usage follows below):

cache: By default, the server will cache all GET requests for 180 seconds.

  • DEFAULT - Default behavior.
  • SKIP - Never cache endpoint responses.
  • STORE - Permanently cache response in Redis, and use that on subsequent GETs.

cache_duration: Seconds. Used to override the default 180-second cache duration for GET requests.

throttle: Todo. Throttles endpoint for 10 requests per n seconds.

siteRole: Limit access to a particular site role. Check util.proto for current values.
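
These options attach to an rpc alongside the Google HTTP annotations. The exact option names and enum values live in util.proto, so treat the following as a hypothetical sketch rather than the canonical syntax:

rpc GetTodos(GetTodosRequest) returns (GetTodosResponse) {
  option (google.api.http) = {
    get: "/v1/todos"
  };
  // Hypothetical: skip the default 180-second GET cache for this endpoint
  option (cache) = SKIP;
  // Hypothetical: or override the cache duration instead
  // option (cache_duration) = 60;
}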

Following our example, we might create a service for our Todos like this:

service TodoService {
  rpc PostTodo(PostTodoRequest) returns (PostTodoResponse) {
    option (google.api.http) = {
      post: "/v1/todos"
      body: "*"
    };
  }

  rpc GetTodos(GetTodosRequest) returns (GetTodosResponse) {
    option (google.api.http) = {
      get: "/v1/todos"
    };
  }

  rpc DeleteTodo(DeleteTodoRequest) returns (DeleteTodoResponse) {
    option (google.api.http) = {
      delete: "/v1/todos/{id}"
    };
  }
}

And its corresponding input/output objects:

message PostTodoRequest {
  ITodo todo = 1 [(google.api.field_behavior) = REQUIRED];
}

message PostTodoResponse {
  string id = 1 [(google.api.field_behavior) = REQUIRED];
  bool done = 2 [(google.api.field_behavior) = REQUIRED];
}

message GetTodosRequest {}

message GetTodosResponse {
  repeated ITodo todos = 1 [(google.api.field_behavior) = REQUIRED];
}

message DeleteTodoRequest {
  string id = 1 [(google.api.field_behavior) = REQUIRED];
}

message DeleteTodoResponse {
  string id = 1 [(google.api.field_behavior) = REQUIRED];
}

🔗 Handling an API

Each RPC we define in a Protobuf will be auto-discovered on build of the Go server. A corresponding handler must be created in go/pkg/handlers to handle the RPC. When writing handler functions, the API provides functionality from a number of built-in clients. Clients can be extended by adding to go/pkg/clients and should come with a corresponding interface in go/pkg/clients/interfaces.go. This version supports an LLM service, the database, Redis, Keycloak, and our WebSocket server. Clients are attached to the Handlers struct and can be used like h.Database, h.Redis, etc., as seen below.

At this point, we’ve set up some data structures around our feature, including some endpoints and params we can interact with. To handle the endpoint from the API context, we’ll make a new file /go/pkg/handlers/todo.go. If we don’t add a handler, a warning will be given during startup of the Go server.

An example of a handler we could create, based on the Protobuf service definition we defined earlier:

func (h *Handlers) PostTodo(w http.ResponseWriter, req *http.Request, data *types.PostTodoRequest) (*types.PostTodoResponse, error) {
	session := h.Redis.ReqSession(req)

	var id string
	err := h.Database.Client().QueryRow(`
		INSERT INTO dbtable_schema.todos (task, done, created_on, created_sub)
		VALUES ($1, FALSE, $2, $3::uuid)
		RETURNING id
	`, data.GetTodo().GetTask(), time.Now().Local().UTC(), session.UserSub).Scan(&id)

	if err != nil {
		return nil, util.ErrCheck(err)
	}

	return &types.PostTodoResponse{Id: id, Done: false}, nil
}
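
For a read path, a GetTodos handler might query the view we created earlier. This is a sketch under the assumption that the database client exposes a Query method mirroring database/sql, and that the generated ITodo struct carries Id, Task, Done, and CreatedOn fields:

func (h *Handlers) GetTodos(w http.ResponseWriter, req *http.Request, data *types.GetTodosRequest) (*types.GetTodosResponse, error) {
	// Select from the view so camelCase conversion and enabled filtering are handled for us
	rows, err := h.Database.Client().Query(`
		SELECT id, task, done, "createdOn"
		FROM dbview_schema.enabled_todos
	`)
	if err != nil {
		return nil, util.ErrCheck(err)
	}
	defer rows.Close()

	todos := []*types.ITodo{}
	for rows.Next() {
		todo := &types.ITodo{}
		if err := rows.Scan(&todo.Id, &todo.Task, &todo.Done, &todo.CreatedOn); err != nil {
			return nil, util.ErrCheck(err)
		}
		todos = append(todos, todo)
	}

	return &types.GetTodosResponse{Todos: todos}, nil
}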

🔗 Creating a Component

React components live inside the /ts/src/modules folder. Begin by creating a new file in /ts/src/modules/example/Todos.tsx. The UI is implemented primarily using React functional components. Here we’ll provide a sample component using basic React constructs with Material-UI:

import React, { useState } from 'react';
import TextField from '@mui/material/TextField';

import { siteApi, useUtil } from 'awayto/hooks';

export function Todos (): React.JSX.Element {

  const { setSnack } = useUtil();
  const [postTodo] = siteApi.usePostTodoMutation();

  const [todo, setTodo] = useState({
    task: ''
  });
  
  const handleSubmit = () => {
    const { task } = todo;

    if (!task) {
      setSnack({ snackType: 'error', snackOn: 'You must have something todo!' });
      return;
    }
    
    postTodo({ task });
  }

  return <>
    <TextField
      id="task"
      label="Task"
      name="task"
      value={todo.task}
      onKeyDown={e => {
        if ('Enter' === e.key) {
          void handleSubmit();
        }
      }}
      onChange={e => setTodo({ task: e.target.value })}
    />
  </>;
}

export default Todos;
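
On the read side, the generated query hook can be consumed the same way; a minimal sketch, assuming the build step names the hook useGetTodosQuery and the response follows the GetTodosResponse shape:

import React from 'react';

import { siteApi } from 'awayto/hooks';

export function TodoList(): React.JSX.Element {

  // data follows GetTodosResponse, i.e. { todos: ITodo[] }
  const { data, isLoading } = siteApi.useGetTodosQuery();

  if (isLoading) return <>Loading...</>;

  return <>
    {data?.todos?.map(({ id, task, done }) =>
      <div key={id}>{task} {done ? '(done)' : ''}</div>
    )}
  </>;
}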

🔗 Development

Like any full-stack implementation, Awayto has its own concept of the file system, modularization, and deployments. It is built as a monorepo, but its function depends on multiple containers working together. This is the case for both local and deployed environments. Beyond this, we have to consider the needs of a production environment versus a development environment. The CLI bridges these concerns by being a unified interface to manage both the development and deployment of the platform.

The root url / of the application will serve the built Hugo project. Hugo is used because it is a lightweight static site generator. Ultimately, we want a separation between the JavaScript application we’re creating and a landing/marketing page containing other beneficial information. Using Hugo as a first point of entry means the end-user experiences extremely fast response times on their first visit to the site. Later on, when they go to access the application at the /app route and need to download the related JavaScript, this can be done in an incremental fashion; the user isn’t inundated with downloading resources on their first visit.

🔗 Application Architecture

0.3.0 is built around a monolithic Go server. It serves the HTML, caches requests with Redis, stores data in a Postgres database, acts as a WebSocket server, and proxies requests to the Docker-managed services Keycloak, Gotenberg, and Coturn. As part of cloud deployment, only HTTP ports 80 and 443 and Coturn-related ports are public-facing.

| Name | Docker | Purpose | Ports |
|---|---|---|---|
| Go server | | Custom Go standard lib HTTP server. | 80, 443 |
| db | x | Postgres in its managed alpine container. | 5432 |
| redis | x | Redis in its managed container. | 6379 |
| auth | x | Keycloak in its managed container. | 8080, 8443 |
| turn | x | Coturn in its managed container, using host network to handle port assignments. | 3478, 44400-44500 UDP |
| docs | x | Gotenberg container for file conversion to PDF. | 8000 |

🔗 Deployment

In version 0.3.0, we continue to rely on Hetzner for self-hosting. However, only a single monolithic server is deployed, as opposed to the distributed server setup in 0.2.0. This greatly simplifies the processes and resources necessary for deployment. The Makefile offers two simple commands for deployment management. Deployed metadata is stored in the deployed folder, once deployment is complete.

  • make host_up - Deploys Hetzner instance based on .env properties.
  • make host_down - Deletes Hetzner instance based on .env properties.

🔗 Guides

Check out these guides for specific information about how lower-level parts of the platform function.

🔗 Sessions

In a handler function, a reference to the session can be retrieved from Redis by passing the current request into the ReqSession function, as seen in Handling an API. This returns a UserSession with the following available properties:

type UserSession struct {
	UserSub                 string   `json:"userSub"`
	UserEmail               string   `json:"userEmail"`
	GroupName               string   `json:"groupName"`
	GroupId                 string   `json:"groupId"`
	GroupSub                string   `json:"groupSub"`
	GroupExternalId         string   `json:"groupExternalId"`
	GroupAi                 bool     `json:"ai"`
	SubGroups               []string `json:"subGroups"`
	SubGroupName            string   `json:"subGroupName"`
	SubGroupExternalId      string   `json:"subGroupExternalId"`
	Nonce                   string   `json:"nonce"`
	AvailableUserGroupRoles []string `json:"availableUserGroupRoles"`
}

Each handler also provides direct access to the standard Go library server Request and ResponseWriter objects. There are a few additional Request context parameters available:

  • LogId - a unique request id.
  • SourceIp - the remote address.
  • UserSession - references the above struct.

🔗 Socket Basics

Real-time communications are supported by a standard Coturn container and a custom WebSocket endpoint as part of the Go API. In the React app, you will find corresponding elements which enable the use of voice, video, and text communications. The tree of our React app is constructed in such a way that, as authenticated users begin to render the layout, a React provider/context instantiates a long-lived WebSocket connection for use anywhere in the app. Using the WebSocket context, we get access to the most basic features of a socket connection, which follows a typical topic pub/sub implementation.

type WebSocketContextType = {
  connectionId: string;
  connected: boolean;
  transmit: (store: boolean, action: string, topic: string, payload?: Partial<unknown>) => void;
  subscribe: <T>(topic: string, callback: SocketResponseHandler<T>) => () => void;
}

  • connectionId: A one-time global identifier for this connection to the socket server. There is currently no tracking for connections across browser tabs, so if you open a new tab you will get a new connection id, etc.
  • connected: Current connection state.
  • transmit:
    • store: Setting true will store transmitted messages in the database table dbtable_schema.topic_messages.
    • action: The type of message as it pertains to functionality of the socket. For example, when creating a chat you might have an action to signify when users join or leave the chatroom.
    • topic: The channel or room in which messages will be sent.
    • payload: The message being sent. Generally a simple key/value pair.
  • subscribe: Join a user to a specific topic and set up a callback describing how received messages should be handled on the client. A type can be supplied to specify the payload type returned in the callback.
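
As a rough sketch of raw context usage (the context name WebSocketContext and its retrieval via useContexts are assumptions here, mirroring how the text and call contexts are accessed later):

import React, { useContext, useEffect } from 'react';

import { useContexts } from 'awayto/hooks';

export function SocketDemo(): React.JSX.Element {

  // Hypothetical context name; see the text/call providers for the real pattern
  const { connected, transmit, subscribe } = useContext(useContexts().WebSocketContext) as WebSocketContextType;

  useEffect(() => {
    if (!connected) return;

    // subscribe returns an unsubscribe function, used here as the effect cleanup
    const unsubscribe = subscribe<{ message: string }>('demo-topic', ({ payload }) => {
      console.log('received', payload?.message);
    });

    // store = false: relay between clients without persisting to topic_messages
    transmit(false, 'message', 'demo-topic', { message: 'hello' });

    return unsubscribe;
  }, [connected]);

  return <></>;
}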

The WebSocket context itself is pretty low level, and there are still some very generic use cases we can cover with high-level abstractions, such as managing client-side connections, disconnections, and user lists. For this we make use of a more readily usable React hook, useWebSocketSubscribe. Here is a trivial but complete implementation of the hook to see how it can be used:

import React, { useState, useEffect } from 'react';
import { useWebSocketSubscribe } from 'awayto/hooks';

declare global {
  interface IProps {
    chatId?: string;
  }
}

export function UserChat({ chatId }: IProps): React.JSX.Element {

  const [messages, setMessages] = useState<{ sender: string, message: string, timestamp: string }[]>([]);

  // Here we'll instantiate the socket subscription with an arbitrary 'chatId' (which should be the same for all participants), and a very simple payload of { message: string }, which could be any structure necessary depending on the feature being implemented
  const {
    userList,
    subscriber,
    unsubscriber,
    connectionId,
    connected,
    storeMessage,
    sendMessage
  } = useWebSocketSubscribe<{ message: string }>(chatId, ({ timestamp, type, topic, sender, store, payload }) => {
    
    // Received a new message
    const { message } = payload;

    // A single user could have multiple connections,
    // so we need to iterate over their connection ids and then extend our messages collection
    for (const user of userList.values()) {
      if (user.cids.includes(sender)) {
        setMessages(m => [...m, {
          sender,
          message,
          timestamp
        }]);
      }
    }
    
  });

  useEffect(() => {
    // Someone joined the chat
  }, [subscriber]);

  useEffect(() => {
    // Someone left the chat
  }, [unsubscriber]);

  const messageHandler = (message: string) => {
    // To store the message in the database
    storeMessage('stored-message', { message }); // This { message } payload is bound by the type supplied to `useWebSocketSubscribe`

    // Or just send a message between clients
    sendMessage('normal-message', { message }); // It doesn't matter what the type is, but 'normal-message' will be available in the callback of `useWebSocketSubscribe` as `type` for further handling if needed
  }

  return <>
    {/* render the messages */}
  </>
}

There is a lot we can accomplish with useWebSocketSubscribe, and it can be configured for any pub/sub related context. For a look at more advanced uses of the socket, review the Call provider, Text provider, and Whiteboard component to see how multiple actions can be utilized more fully, how to handle subscribers, and more.

Less Basic: Socket Authorization and Allowances

The WebSocket protocol does not define any methods for securing the upgrade request necessary to establish a connection between server and client. However, authenticated users will have an ongoing session in our Go API, so we can use a ticketing system to ensure only authorized users can access the socket. Once the user is connected, the socket server can then handle its own requests to determine which topics the user is allowed to connect to.

In 0.3.0, the WebSocket “server” is fully handled by the /sock endpoint in the Go server, after receiving a /ticket (a browser-side sketch follows the list):

  • the browser makes a request to /ticket
  • the browser receives a connectionId:authCode style pairing from /ticket which it uses to make the upgrade request
  • a request is made to the /sock endpoint, configured as an UPGRADE and handled with goroutines
  • the endpoint checks the incoming authCode against what has been stored on the server, expiring the ticket
  • the browser can proceed to send messages using the connectionId, which are then routed internally
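
A browser-side sketch of this handshake; the exact response format of /ticket and how the ticket is attached to the upgrade request are assumptions here:

async function connectToSock(): Promise<WebSocket> {
  // Request a one-time ticket; assumed to return a "connectionId:authCode" pairing
  const ticket = await (await fetch('/ticket')).text();

  // Make the upgrade request against /sock, presenting the ticket for validation
  const ws = new WebSocket(`wss://${location.host}/sock?ticket=${ticket}`);

  ws.addEventListener('open', () => {
    // Once validated, messages can be routed internally using the connectionId
    ws.send(JSON.stringify({ action: 'subscribe', topic: 'exchange/text:some-id' }));
  });

  return ws;
}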

After connecting, the client can use the transmit function described previously to send a 'subscribe' action, along with the desired topic. An abstracted example of this process is used in the useWebSocketSubscribe hook.

When a user subscribes to a topic, the socket server must ensure they are allowed to join it. This can be done in many different ways depending on the purpose of the socket connection. Currently the only implementation for socket comms is based around the Exchange module, which handles meetings between users. An Exchange is a record representing what happens in a meeting. The meeting itself (date, time, participants, etc.) is a separate record called a Booking. Users joining an Exchange chatroom are required to be related to that booking record id in the database. This is a complex interaction, but it ensures participants are valid for a given topic. Using the Exchange module as an example, we’ll break down the process here:

  1. The user clicks, for example, a link which redirects them to /app/exchange/:id using a booking id, which routes them to the Exchange component and pulls out the ID as a parameter.

  2. The Exchange module is wrapped, using either the Text or Call providers, using a relevant topic name and the ID parameter:

<WSTextProvider
  topicId={`exchange/text:${exchangeContext.exchangeId}`}
  topicMessages={topicMessages}
  setTopicMessages={setTopicMessages}
>
  <WSCallProvider
    topicId={`exchange/call:${exchangeContext.exchangeId}`}
    topicMessages={topicMessages}
    setTopicMessages={setTopicMessages}
  >
    <ExchangeComponent />
  </WSCallProvider>
</WSTextProvider>
  3. The text or call provider internally attempts to subscribe to our topicId, e.g. exchange/text:${exchangeContext.exchangeId}. The socket server is responsible for checking the user’s allowances at the moment of subscription. This process is handled in the above referenced endpoint.

  4. User allowances are maintained internally using custom logic pertaining to the related allowance. For example, the function API.Handlers.Database.GetSocketAllowances currently only cares about maintaining Exchange related socket connections, and does so with specific queries checking if the current user is related to the Exchange topic id they want to join. Other allowances would need to extend these queries and be handled in a similar fashion.

  5. A switch handler makes a check to determine if the user has access to the topic id being requested. In the case of the booking/exchange system, this is very basic.

  6. If the handler finds that the user is related to the booking id they are requesting, the subscribe function continues on with all the wiring up of a user’s socket connection.

  7. Now inside our Exchange component, we can tap into the text or call contexts.

🔗 Voice, Video, and Text

Communications functionality is core to the system, and the platform offers some built-ins to make real-time applications easier to implement. These come in the form of React contexts, so you can build components that work across the application and aren’t tied to any pre-existing components. Unlike the base WebSocket context, which wraps the application at a high level, these built-ins should be used where needed by wrapping your desired components in the given context’s provider. Once familiar with their usage and purpose, you are encouraged to dive deeper by customizing the providers themselves, as they can extend the look and function of the components used internally.

Text Messaging Context

type WSTextContextType = {
  wsTextConnectionId: string;
  wsTextConnected: boolean;
  chatLog: React.JSX.Element;
  messagesEnd: React.JSX.Element;
  submitMessageForm: React.JSX.Element;
}

  • wsTextConnectionId: The connection id of the underlying socket.
  • wsTextConnected: The connection status.
  • chatLog: A styled element containing the chat logs.
  • messagesEnd: A helper element to “scroll-to-bottom” of the chat log where needed.
  • submitMessageForm: An input box to submit a message to the channel.

Text Provider Usage

As mentioned, to utilize the context we need to wrap our component with the context’s provider, WSTextProvider. The channel can then be configured with a unique topic id, which signifies the “room” our users are joining. We also maintain the set of topic messages outside of the provider, so they can be passed to other components as necessary. For example, if you nested the call provider inside the text provider and both shared the same topic messages, the chat components could say things like “John joined the call.”

import React, { useState, useContext } from 'react';

import Grid from '@mui/material/Grid';

import { SocketMessage } from 'awayto/core';
import { useComponents, useContexts } from 'awayto/hooks';

function ChatLayout(): React.JSX.Element {
  
  const {
    chatLog,
    messagesEnd,
    submitMessageForm
  } = useContext(useContexts().WSTextContext) as WSTextContextType; // Context types are declared globally and don't need to be imported

  return <>
    <Grid container direction="column" sx={{ flex: 1 }}>
      <Grid item sx={{ flex: '1', overflow: 'auto' }}>
        {chatLog}
        {messagesEnd}
      </Grid>

      <Grid item pt={1}>
        {submitMessageForm}
      </Grid>
    </Grid>
  </>;
}

export default function GeneralChat(): React.JSX.Element {

  const { WSTextProvider } = useComponents();

  const [topicMessages, setTopicMessages] = useState<SocketMessage[]>([]);

  return <>
    <WSTextProvider
      topicId={'general-chat'}
      topicMessages={topicMessages}
      setTopicMessages={setTopicMessages}
    >
      <ChatLayout />
    </WSTextProvider>
  </>;
}

Call Context

The call context sets up the elements needed to manage a WebRTC voice and video call. The socket connection is used internally to route messages to peers in order to set up a peer connection using the Coturn server. From there, users are directly connected using the WebRTC protocol. The props of the built-in context allow for the construction of voice and video chatroom components.

type WSCallContextType = {
  audioOnly: boolean;
  connected: boolean;
  canStartStop: string;
  localStreamElement: React.JSX.Element;
  senderStreamsElements: (React.JSX.Element | undefined)[];
  setLocalStreamAndBroadcast: (prop: boolean) => void;
  leaveCall: () => void;
}

  • audioOnly: Once a call is started, this flag can be used for various layout needs.
  • connected: The current status of the call.
  • canStartStop: This prevents repeated start/stop attempts while a call is already being started/stopped. An empty string means a call is in the process of starting; a value of 'start' implies there is no ongoing call, 'stop' means a call has started and can be stopped (using leaveCall).
  • localStreamElement: A component for the current user’s own video rendering area.
  • senderStreamsElements: An array of video components for each peer in the call.
  • setLocalStreamAndBroadcast: A handler allowing the current user to join the call. Passing true will allow video to be sent. Passing nothing or false will only join the call with audio.
  • leaveCall: A handler to leave a call if currently connected.

Call Provider Usage

Much the same as the texting context, we must wrap our call layout using the call provider, WSCallProvider. Then we can lay out the components as needed for our call.

import React, { useState, useContext } from 'react';

import Grid from '@mui/material/Grid';
import Button from '@mui/material/Button';

import { SocketMessage } from 'awayto/core';
import { useComponents, useContexts } from 'awayto/hooks';

function CallLayout(): React.JSX.Element {

  const {
    audioOnly,
    connected,
    localStreamElement,
    senderStreamsElements,
    setLocalStreamAndBroadcast,
    leaveCall
  } = useContext(useContexts().WSCallContext) as WSCallContextType; // Context types are declared globally and don't need to be imported

  return <>
    <Grid container direction="column" sx={{ backgroundColor: 'black', position: 'relative', flex: 1 }}>
      {localStreamElement && <Grid item xs={12}
        sx={{
          position: senderStreamsElements.length ? 'absolute' : 'inherit',
          right: 0,
          width: senderStreamsElements.length ? '25%' : '100%'
        }}
      >
        {localStreamElement}
      </Grid>}
      {senderStreamsElements.length > 0 && senderStreamsElements}
    </Grid>

    {connected && <Button onClick={() => leaveCall()}>
      Leave Call
    </Button>}

    {(!connected || audioOnly) && <Button onClick={() => setLocalStreamAndBroadcast(true)}>
      Join with Voice & Video
    </Button>}

    {!connected && <Button onClick={() => setLocalStreamAndBroadcast(false)}>
      Join with Voice
    </Button>}
  </>;
}

export default function GeneralCall(): React.JSX.Element {

  const { WSCallProvider } = useComponents();

  const [topicMessages, setTopicMessages] = useState<SocketMessage[]>([]);

  return <>
    <WSCallProvider
      topicId={'general-call'}
      topicMessages={topicMessages}
      setTopicMessages={setTopicMessages}
    >
      <CallLayout />
    </WSCallProvider>
  </>;
}

For our actual implementation and usage of the call and text providers, check out the Exchange module. There, we combine voice, video, and text, as well as a collaborative socket-driven canvas.

🔗 Dynamic Component Bundling

As a project grows larger, there should be some control around how the project is bundled and served to the client. In our case, Awayto utilizes React as a front-end library to construct and execute our component library. As part of React, we get access to the Suspense and Lazy APIs, which enable us to gain that bit of control we need.

With modern JavaScript bundlers, we can make use of tree-shaking and code-splitting to output our project into file chunks. This ultimately means a client only downloads components that it needs to render in real-time. And as most components will be small in nature, these added requests aren’t too big of a deal in the grand scheme of load times.

To accomplish this, we use a mixture of build-time scripting, the JavaScript Proxy API, and the aforementioned React APIs, Suspense and Lazy.
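
Underneath Awayto’s specific tooling, the generic React mechanism looks like this (a plain Suspense/lazy sketch with a hypothetical module path, not the actual useComponents implementation):

import React, { Suspense } from 'react';

// React.lazy defers downloading this chunk until the component first renders
const MyComponent = React.lazy(() => import('./modules/example/MyComponent'));

export function LazyExample(): React.JSX.Element {
  return <Suspense fallback={<div />}>
    <MyComponent />
  </Suspense>;
}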

  • As part of our CRACO configuration, the function checkWriteBuildFile parses the structure of our /ts/src/modules folder, and writes a manifest for all the components and contexts available to us. This manifest is stored in /ts/src/build.json.
  • In our series of React hooks, useComponents and useContexts use the manifest to load files when needed, and keep a local cache of downloaded components.
  • By using the Proxy API, our hook allows us to download/use a component just by accessing it as a property of useComponents (or useContexts). useContexts will pick up any file ending with Context, so beware.
  • If a component doesn’t exist, an empty div will be rendered instead. With advanced usage, we can feature-lock components from being used without running into compilation errors in the event a component isn’t in the file system.

import { useComponents } from 'awayto/hooks';

export default function SomeComponent() {
  const { MyComponent, DoesntExist } = useComponents();

  return <>
    <MyComponent /> {/* All good! */}
    <DoesntExist /> {/* This will just render an empty <div /> */}
  </>
}

As a result of this method, we incur some side effects. For example, you will notice that at no time is “MyComponent” imported anywhere. This means we lose static analysis when it comes to build-time type checking, error handling, and so forth. The builder knows that we have a module folder full of React components, and it will build each one into its own chunk as necessary. However, it won’t know how files are interconnected and used within one another. As we use TypeScript, this means we need a way to share types across the project. The solution is to globally extend our component properties interface wherever we make a component, as seen in many of the existing components. Random example:

// ...
declare global {
  interface IProps {
    pendingQuotesAnchorEl?: null | HTMLElement;
    pendingQuotesMenuId?: string;
    isPendingQuotesOpen?: boolean;
    handleMenuClose?: () => void;
  }
}

export function PendingQuotesMenu({ handleMenuClose, pendingQuotesAnchorEl, pendingQuotesMenuId, isPendingQuotesOpen }: IProps): React.JSX.Element {
// ...

Now when we go to implement this component in other components, using the useComponents hook, we can properly utilize the prop types. Beware of using the same prop type names in different components, as we are extending a global interface.

An arguably large benefit of all of this is that a first-time visit to the site incurs more or less the same small package download size. We don’t bundle all our components into one large file, and they are loaded asynchronously on demand. So the initial app download size remains small (less than 1 MB, mostly the styling libraries) even with thousands of components in the system. Whether this is useful to your specific project is for you to determine; the use of useComponents or useContexts isn’t compulsory.