S3 permissions are confusing. The key for me was to understand there are three levels:
- ACL policies.
- “Simple permissions” on objects and buckets.
- Global disabling of public access for a bucket.
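As a hedged illustration (the bucket and key names are made up, and mapping the three levels to these particular commands is my reading of the list), each level has its own `aws s3api` touchpoint:

```shell
# Level 1: an ACL applied to the bucket itself
aws s3api put-bucket-acl --bucket my-example-bucket --acl private

# Level 2: "simple" canned permissions on a single object
aws s3api put-object-acl --bucket my-example-bucket \
  --key image.png --acl public-read

# Level 3: globally block public access for the bucket,
# overriding whatever the ACLs and policies would otherwise allow
aws s3api put-public-access-block \
  --bucket my-example-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Note that the third level wins: with the public access block in place, a `public-read` object ACL no longer makes the object readable.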
Do it yourself. All of these tools use Chrome Headless, so you can just use:
Related tools:
A note on speed: Generating images via Chrome Headless can take a few seconds. Unless you are pre-generating them, the delay can cause them not to be shown at all. The behavior seems to depend on the app: Telegram and WhatsApp start loading the preview when the link is pasted, even if the message is not yet submitted. If you submit the link before the image is ready, Telegram usually seems to attach the social card later once it is ready. WhatsApp Web, however, will not – if the link is sent early, there will be no social image.
The problem is described at length here.
The short version is this: when rendering on the server and calling rehydrate on the client, React expects the markup generated on the server and the render pass on the client to produce the same results.
This can be problematic if the server lacks certain information that you have available on the client and needs to use a fallback value, while on the client side you use the correct value. Common examples:
What happens if the markup does not match? There are two options: React may patch up the differences, or you may end up with attributes such as className not updating.

What are our options? Render the fallback value during the first client-side pass as well, then use useEffect or componentDidUpdate to re-render with the client-side value. It may be desirable to use useLayoutEffect to avoid a visible flash of the fallback content.
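The two-pass idea can be simulated outside of React (the render function and the language values below are hypothetical stand-ins): the first client render must reproduce the server's fallback markup exactly, and only a later pass swaps in the client-only value.

```javascript
// A framework-free sketch of the two-pass technique. renderGreeting()
// stands in for a React component's output.
function renderGreeting(language) {
  return `<p class="lang-${language}">Hello</p>`;
}

// Server: navigator.language is not available, so render a fallback.
const serverHtml = renderGreeting('en');

// Client, pass 1 (hydration): must produce the exact same markup.
const firstClientHtml = renderGreeting('en');

// Client, pass 2 (triggered from useEffect / componentDidUpdate):
// now it is safe to render the real, client-only value.
const detectedLanguage = 'de'; // e.g. from navigator.language
const secondClientHtml = renderGreeting(detectedLanguage);

console.log(serverHtml === firstClientHtml); // true – hydration is safe
```

The cost of this pattern is an extra render and a brief moment where the user sees the fallback value.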
I’ve recently started to think about how to generate videos using code. Here are some links I found.
Python-based library to script videos. Works nicely and cleverly wraps the imagemagick and ffmpeg binaries. However, you’re quickly down to manipulating image bytes manually once you want to do anything beyond what they ship – or you find some other library to generate the image data you want. In the end, this is a helper layer on top that generates the frames being sent to ffmpeg.
This has a lot going for it. The experience of working with ExtendScript is not great, but it is not awful either – you can now even write and debug in VSCode. Obviously AfterEffects lets you do a lot of powerful stuff, but creating a longer sequence can take quite a while, since you are just executing UI actions in sequence. The whole thing binds you to a UI-based workflow. nexrender seems to be a pretty cool tool on top of AE to fix this.
Other interesting links are ae-to-json and aepx.js.
This document discusses a number of options. However, there are essentially two approaches:
When animating on a Canvas, the stream can be captured. This should hopefully give a pretty good framerate. You are essentially going through the normal browser runtime / render loop with this. See:
BTW, the guy who posted these snippets has the most amazing personal projects.
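A minimal sketch of the capture approach (browser-only; the frame rate and mime type here are assumptions – check MediaRecorder codec support in your target browser):

```javascript
// Record a <canvas> animation into a WebM blob using the browser's
// captureStream() + MediaRecorder APIs. Must run in a browser.
function recordCanvas(canvas, durationMs) {
  const stream = canvas.captureStream(30); // 30 fps requested
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  return new Promise((resolve) => {
    recorder.onstop = () =>
      resolve(new Blob(chunks, { type: 'video/webm' }));
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

Something like `recordCanvas(document.querySelector('canvas'), 3000)` would then yield roughly three seconds of video; since this goes through the live render loop, any dropped frames end up in the recording.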
Frame-by-Frame
This requires your code to be written in such a way that you can render each frame individually; the frames can then be captured one by one. Tools taking this approach mock requestAnimationFrame, Date() and so on, so that time can be advanced frame by frame. Any animation library which lets you go to a particular target frame is suitable for this, including GSAP, Scene.js and anime.
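The key requirement can be shown with a tiny sketch (the numbers are made up): the scene must be a pure function of the frame index, not of wall-clock time.

```javascript
// Frame-by-frame rendering: the position depends only on the frame
// number, so frame 60 looks the same no matter when or how slowly
// it is rendered.
function circleX(frame, fps = 30, pxPerSecond = 100) {
  const seconds = frame / fps;
  return seconds * pxPerSecond;
}

// Capture loop: seek, draw, screenshot – no requestAnimationFrame,
// no Date() involved.
const xs = [];
for (let frame = 0; frame <= 60; frame += 30) {
  xs.push(circleX(frame)); // here you would draw and capture the frame
}
console.log(xs); // [ 0, 100, 200 ]
```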
Other things I found which are cool: A timeline widget, another one.
I am aware of two tools to generate types (TypeScript, but they also support Flow) for your GraphQL queries:
Both seem to work fine as-is, but I am dealing with the extra challenge of having two separate schema endpoints in my project; that is, some queries you will encounter in the source point to GraphQL server A, others point to GraphQL server B.
Because you can point apollo-tooling only to a single schema, client:codegen will fail when it encounters a query for the other schema, since it will notice that the types do not match.
graphql-code-generator has the limitation that it only works with separate .graphql files, so it will not find gql tags in your code and extract the queries from there. What you can do here is specify which .graphql files to include, so you can name your files accordingly: .schemaA.graphql, .schemaB.graphql.
Conveniently, it allows you to define multiple targets with their own source files and schema in one config file.
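A sketch of such a config for graphql-code-generator’s codegen.yml (the endpoints, paths and plugin choices here are hypothetical):

```yaml
# One "generates" target per schema; each target picks up only
# its own .schemaX.graphql documents.
generates:
  src/types/schemaA.ts:
    schema: https://server-a.example.com/graphql
    documents: "src/**/*.schemaA.graphql"
    plugins:
      - typescript
      - typescript-operations
  src/types/schemaB.ts:
    schema: https://server-b.example.com/graphql
    documents: "src/**/*.schemaB.graphql"
    plugins:
      - typescript
      - typescript-operations
```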
apollo-tooling only allows a single schema/configuration in the config file, but you can just use two separate config files, or pass options on the CLI. How can we target our gql queries to each schema? We have to use Apollo’s ability to specify the name of the graphql tag used. Usually, this would be:
import gql from 'graphql-tag';
const query = gql`query Foo { bar }
`;
Instead we can do:
import gqlA from 'graphql-tag';
const query = gqlA`query Foo { bar }
`;
And then we can run the code generator:
apollo client:codegen types/ --target typescript --tagName gqlA --outputFlat