OpenGraph Image Generators

Do it yourself. All of these tools use Chrome Headless under the hood, so you can just drive it directly:
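For example, a minimal sketch using Puppeteer (the template URL, output path, and viewport size are placeholders):

const puppeteer = require('puppeteer');

(async () => {
  // Render an HTML template in headless Chrome and screenshot it
  // at a common OpenGraph card size.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 });
  await page.goto('http://localhost:3000/og-template', { waitUntil: 'networkidle0' });
  await page.screenshot({ path: 'og-image.png' });
  await browser.close();
})();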

Related tools:

A note on speed: Generating images via Chrome Headless can take a couple of seconds. Unless you are pre-generating them, the delay can cause them not to be shown at all. The behavior seems to depend on the app: Telegram and WhatsApp start loading the card when the link is pasted, even before the message is submitted. If you submit the link before the image is ready, Telegram usually seems to attach the social card later once it is ready. WhatsApp Web, however, will not: if the link is sent early, there will be no social image.

The React SSR Rehydrate Markup Matching Issue

The problem is described at length here.

The short version is this: when rendering on the server and calling ReactDOM.hydrate() on the client, React expects the markup generated on the server and the first render pass on the client to produce the same result.

This can be problematic if the server lacks certain information that is available on the client and needs to use a fallback value, while on the client you use the correct value. Common examples:

  • The server does not know if the user is logged in, but the client does.
  • The server does not know the device size, but on the client you render different markup for different devices.

What happens if the markup does not match? There are two possible outcomes:

  1. React will show a warning in the console, but will otherwise “fix the differences”.
  2. React will not fix the differences, and your markup will be a mix of the one rendered on the server and the one rendered on the client. This is bad. It seems to happen a lot due to className not updating.

What are our options?

  1. Render both versions on the server. This is what fresnel does.
  2. Render a second time on the client after hydration. This is described in the ReactDOM.hydrate() docs. It involves making sure, one way or another, that the first time a component renders (during hydration) it uses the same values as on the server, and then triggering a useEffect or componentDidUpdate to re-render with the client-side value; a sketch of this pattern follows this list. It may be desirable to use useLayoutEffect instead.
  3. Wait for Client Hints to be supported by browsers.
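A minimal sketch of the second option, using a hypothetical useClientValue hook (MobileNav and DesktopNav are placeholder components):

import { useEffect, useState } from 'react';

// Returns the server’s fallback value on the first (hydration) render;
// after mount, an effect swaps in the real client-side value, which
// triggers the second render pass.
function useClientValue(getClientValue, serverFallback) {
  const [value, setValue] = useState(serverFallback);
  useEffect(() => {
    setValue(getClientValue());
  }, []); // run once, after hydration is complete
  return value;
}

function Nav() {
  // false is what the server had to assume; after mount we know for real
  const isMobile = useClientValue(() => window.innerWidth < 600, false);
  return isMobile ? <MobileNav /> : <DesktopNav />;
}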

Scripting Video

I’ve recently started to think about how to generate videos using code. Here are some links I found.

MoviePy

Python-based library to script videos. Works nicely and cleverly wraps the ImageMagick and ffmpeg binaries. However, you’re quickly down to manipulating image bytes manually once you want to do anything beyond what they ship – or you find some other library to generate the image data you want. In the end, this is a helper layer that generates frames and sends them to ffmpeg.

Scripting Premiere or After Effects

This has a lot going for it. The experience working with ExtendScript is not great, but it is not awful either – you can now even write and debug in VS Code. Obviously After Effects lets you do a lot of powerful stuff. But creating a longer sequence can take quite a while, since you are just executing UI actions in sequence, and the whole thing binds you to a UI-based workflow. nexrender seems to be a pretty cool tool on top of AE to fix this.

Other interesting links are ae-to-json and aepx.js.

Using HTML/JS

This document discusses a number of options. However, there are essentially two approaches:

captureStream()

When animating on a Canvas, the stream can be captured. This should hopefully give a pretty good framerate. You are essentially going through the normal browser runtime / render loop with this. See:

BTW, the guy who posted these snippets has the most amazing personal projects.
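A minimal sketch of the captureStream() approach, assuming a canvas that is already being animated:

const canvas = document.querySelector('canvas');
const stream = canvas.captureStream(30); // capture at up to 30 fps
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  // Assemble the recorded chunks into a playable file, e.g. to offer
  // it as a download via URL.createObjectURL().
  const blob = new Blob(chunks, { type: 'video/webm' });
};

recorder.start();
// ...let the animation run, then:
setTimeout(() => recorder.stop(), 5000); // stop after five seconds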

Frame-by-Frame

This requires your code to be written in such a way that you can render each frame individually. It is then possible to capture them one by one. This approach is used by:

Any animation library which lets you go to a particular target frame is suitable for this, including GSAP, Scene.js and anime; a sketch using GSAP follows below.
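A minimal sketch, assuming a paused GSAP timeline whose tweens draw onto a canvas:

const canvas = document.querySelector('canvas');
const fps = 30;
const tl = gsap.timeline({ paused: true }); // build the animation here
const totalFrames = Math.ceil(tl.duration() * fps);

for (let frame = 0; frame < totalFrames; frame++) {
  // Jump the playhead to this frame’s point in time; GSAP renders
  // the timeline at that position synchronously.
  tl.seek(frame / fps);
  const dataUrl = canvas.toDataURL('image/png');
  // Save each frame to disk, then stitch them together, e.g.:
  //   ffmpeg -framerate 30 -i frame%04d.png out.mp4
}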

Other things I found which are cool: A timeline widget, another one.

Other Software:

  • VideoPuppet – create videos via Markdown Scripts.
  • Editly – command-line video editing.
  • Slaptrash – create videos via special HTML instructions.
  • Kinetophone
  • lang.video – a language for making movies (based on Racket)
  • manim – animation engine for math videos
  • vapoursynth – video processing framework
  • Komposition – Not for scripting, but a UI workflow for screencasters.

Generating TypeScript for GraphQL queries with multiple schema endpoints

I am aware of two tools to generate types (TypeScript, but they also support Flow) for your GraphQL queries: apollo-tooling and graphql-code-generator.

Both seem to work fine as-is, but I am dealing with the extra challenge of having two separate schema endpoints in my project; that is, some queries you will encounter in the source point to GraphQL server A, others to GraphQL server B.

Because you can point apollo-tooling only to a single schema, client:codegen will fail when it encounters a query meant for the other schema, since it will notice that the types do not match.

graphql-code-generator has the limitation that it only works with separate .graphql files, so it will not find gql tags in your code and extract the queries from there. What you can do here is specify which .graphql files to include, so you can name your files accordingly: *.schemaA.graphql, *.schemaB.graphql.

Conveniently, it allows you to define multiple targets, each with its own source files and schema, in one config file.
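For example, a sketch of such a config as codegen.js (the endpoints, file patterns and output paths are made up):

module.exports = {
  generates: {
    // Types for queries against server A
    'src/types/schemaA.ts': {
      schema: 'https://server-a.example.com/graphql',
      documents: 'src/**/*.schemaA.graphql',
      plugins: ['typescript', 'typescript-operations'],
    },
    // Types for queries against server B
    'src/types/schemaB.ts': {
      schema: 'https://server-b.example.com/graphql',
      documents: 'src/**/*.schemaB.graphql',
      plugins: ['typescript', 'typescript-operations'],
    },
  },
};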

apollo-tooling only allows a single schema/configuration in the config file, but you can just use two separate config files, or pass options on the CLI. How can we target our gql queries to each schema? We have to use Apollo’s ability to specify the name of the graphql tag used. Usually, this would be:

import gql from 'graphql-tag';

const query = gql`query Foo { bar }`;

Instead we can do:

import gqlA from 'graphql-tag';

const query = gqlA`query Foo { bar }`;

And then we can run the code generator:

apollo client:codegen types/ --target typescript --tagName gqlA --outputFlat
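Queries targeting the second schema then use a differently named alias (say gqlB), and a second codegen run points at that schema via its own config file (the config file name here is hypothetical):

apollo client:codegen typesB/ --config apollo.schemaB.config.js --target typescript --tagName gqlB --outputFlat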