Conversation

@afaraldo

Now the gem is compatible with Ruby 3.2.2.

There was a change in Psych from 3.1 to 3.2.2 that throws the following error: Tried to load unspecified class: Date (Psych::DisallowedClass)
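
For reference, a minimal reproduction (not from the PR itself): on
Ruby 3.1 and later, Psych 4 makes YAML.load safe by default, so Date
values have to be permitted explicitly:

  require "yaml"
  require "date"

  YAML.load("built: 2023-05-26")
  # => raises Psych::DisallowedClass: Tried to load unspecified class: Date
  YAML.load("built: 2023-05-26", permitted_classes: [Date])
  # => {"built" => #<Date: 2023-05-26 ...>}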

@mort666
Owner

mort666 commented May 26, 2023

Hi, thanks for the PR; am surprised anyone still uses this, tbh. I will give it a once-over later today. Also noticed some deps need checking alongside, from Dependabot, so I will merge and push an updated gem later.

kingdonb pushed a commit to kingdonb/stats-tracker-ghcr that referenced this pull request Jun 22, 2023
mort666/app_version#13

Signed-off-by: Kingdon Barrett <kingdon@weave.works>
@kingdonb

I just stumbled on this gem, and it's neat! It does exactly what I wanted, and better than I was going to do it on my own...

I'm using @afaraldo's branch for now and it's working with Ruby 3.1.4 as well 👍

kingdonb pushed a commit to kingdonb/link-anchor-checker that referenced this pull request Aug 7, 2023
Signed-off-by: Yebyen <yebyen@gmail.com>

Highlight the README of lib/

Signed-off-by: Yebyen <yebyen@gmail.com>

run 'make test'

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Upgrade to Ruby 3.1.4

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rewrite my_wasmer to handle several caches

one for every project/repo/package (image == package)
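
Roughly the shape of it (a sketch with hypothetical names; the actual
my_wasmer code may differ), keyed so each package gets its own store:

  require "yaml/store"

  # one cache file per project/repo/package (image == package);
  # assumes a cache/ directory already exists
  caches = Hash.new { |h, key| h[key] = YAML::Store.new("cache/#{key}.yml") }
  caches["flux2"].transaction { |c| c["download_count"] = 42 }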

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

git commit

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup formatting when we call create_leaf

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

we did it!

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

delete things we didn't need

I did this commit on stage

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

initial empty commit

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

scaffold Gemfile + README

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

.keep files

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

database config with superuser

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

app, test, bin, public

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

delete git_hub_org.rb

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add the Kubernetes YAML per the cli docs

$ be ruby cli
Commands:
  cli controller ORG  # Create a Project for the GitHub ORG and start reconciling it
  cli help [COMMAND]  # Describe available commands or one specific command

We implement this line:
Create a Project for the GitHub ORG and start reconciling it
---
if the Project hasn't been created yet; otherwise, just reconcile everything

The controller reconciles both Projects and Leaves

It also creates a Leaf from each project, and each Leaf executes the
Wasm as well. The Leaf's status then gets updated.

In the next iteration of this project iterator, we'll make sure that
Leaf and Project log their activity back to the database with Rails.
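
A sketch of that control flow (illustrative names only, not the actual
controller code):

  def controller(org_name)
    project = Project.find_by(name: org_name) || Project.create!(name: org_name)
    project.packages.each do |package|
      leaf = Leaf.find_or_create_by!(project: project, package: package)
      leaf.run_wasm  # each Leaf executes the Wasm; its status is updated after
    end
  end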

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

ensure that GithubOrg is created for fluxcd

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Alright so the leaves create a db footprint

Now we can do after-the-fact measurements of what the results of our
Kubernetes operator runs have been, without putting further strain on
our Kubernetes API service to build out the analytics. Thanks, Rails!

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add run fiber to each database model

When we create a database row, we can watch our associated Kubernetes
resource and do whatever it is we're supposed to do to make sure that it
becomes Ready (then update accordingly and mark it as Ready, I think!)

The goal is for the top-level fiber GithubOrg to count how many Wasm
modules it fires. It should then wait until all its Leaves (Packages)
have been created, register more health checks for each of those, and
only mark the top-level GithubOrg as ready when all of its Packages are
finished. Then, we can take the Measurement snapshot and shut it down!

It remains subtly unclear how each of these health checks should work,
and how much responsibility each should have. Fibers should enable us to
avoid worrying too much about blocking and non-blocking operations, but
we must keep the order straight, else it may yet be at risk of deadlock.

Parents keep count of their children; they care for (wait for) each one
to appear in the database. When the children have finished their useful
life, the parent receives the signal to die, which triggers the delete
method.

All meaningful cleanup happens there, including sending the terminate
signal to any children (by deleting their Leaves, each of which has its
own delete method to trigger, and each child cleans up after itself).

We may consider the delete method as the place to push metrics; with
this pattern in place, the path to "Serverless" is almost clear ahead!
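
Sketched out, the lifecycle reads something like this (hypothetical
names; assumes a fiber scheduler is installed, e.g. via the
fiber_scheduler gem):

  Fiber.schedule do
    org = GithubOrg.create!(name: "fluxcd")
    sleep 1 until org.packages.count >= org.expected_count  # wait on children
    org.take_measurement                                    # the snapshot
    org.delete  # cascades: deleting Leaves triggers each child's own delete
  end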

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

increase the test difficulty

it has to handle 64-bit integers now (!!)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

migrate download_count to bigint
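
The migration itself is a single change_column (assuming Rails 7-era
migrations; written as up/down since change_column is not auto-reversible):

  class MigrateDownloadCountToBigint < ActiveRecord::Migration[7.0]
    def up
      change_column :packages, :download_count, :bigint
    end

    def down
      change_column :packages, :download_count, :integer
    end
  end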

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Upgrade download_count field to a bigint

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Completes in about 20s

fresh packages count: 0 (expecting 16) #######
fresh packages count: 7 (expecting 16) #######
final packages count: 16 (expecting 16) #######

The overall time spent is close to 20s now, but it will be much faster
on an optimized Kubernetes control plane with a better network.

The main delay is most likely due to the single-threadedness of Ruby
fibers. The concurrency model is a good experience, even if performance
can suffer!

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

remove instances of 'fluxcd' magic string

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add placeholders for conditions

This should make it possible for children to record for their loving
parents the current results of their reconciliation attempts.

It may impose some new requirements on parents, for instance populating
the initial status with a reasonable structure containing the required
items, in order to validate the resource at create/apply time when it is new (?)

No, I think you can just create a new resource without any Conditions
and that is "Normal" – it's up to the reconciler to create the Condition
items and reflect any changes with Condition updates to show an observed
generation for that condition.
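
The Condition items follow the standard Kubernetes metav1.Condition
shape; as a Ruby hash, the reconciler would patch in something like:

  require "time"

  condition = {
    "type"               => "Ready",
    "status"             => "True",
    "reason"             => "ReconciliationSucceeded",  # illustrative reason
    "message"            => "reconciled",
    "lastTransitionTime" => Time.now.utc.iso8601,
    "observedGeneration" => 1,  # bumped as each new spec generation is observed
  }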

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add some associations

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add basic updating on run

I think we don't really have to implement kstatus conditions to get this
over the finish line

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

It has some version of health checking

It is neither kstatus Ready true/false state nor Conditions.
We are not consistently using the logger either, because reasons.

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

at the end of the demo both GithubOrg and Packages

should have received an update, so their updated_at reflects the last
run time.

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

clean up migration history

packages.download_count has always been bigint

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

separate database connections

and separate processes with `foreman start`

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

reverse order of operations for Sample.ensure

run the sample insert script before we start the reconciler
(since we're not doing concurrency, the reconciler will never yield back
to us, so we need to take care of this ahead of time)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Procfile

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

no concurrency

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

make some enhancements to no-concurrency mode

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

make sure we sleep long enough to delete

the leaves are not all getting deleted when we exit this fast

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

solidify improvements

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

display count while garbage collecting

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

additionalPrinterColumns

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fine tuning

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

proj, l (shortnames)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

make (time foreman start)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set more consistent use of time

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup db schema

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

record measurement finally

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

loosen time constraints

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

implement conditions into concurrency

sad news: it doesn't appear to be any faster (yet)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

I think I have understood the YAML::Store cache
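
For the record (stdlib behavior, not the app's code): YAML::Store is a
PStore backed by YAML, and reads and writes only take effect inside a
transaction, which is the part that is easy to misunderstand:

  require "yaml/store"

  store = YAML::Store.new("cache.yml")        # hypothetical filename
  store.transaction { store["count"] = 42 }   # persisted when the transaction commits
  store.transaction(true) { store["count"] }  # read-only transaction => 42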

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add another sleep 5

We haven't got it quite right if we're exiting the health check before
the leaves are finished, but it doesn't hurt to sleep after the measurement!

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

don't emit so many events

also, table sharding experiment for now (it failed)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

it isn't slower

but it isn't faster... but it implements conditions!
and it models fast feedback, among other good qualities

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

let's run "make" in CI

and ensure that stat.wasm gets committed into the resulting Docker build

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

base

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add Makefile to dockerignore

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

update Dockerfile to use pushed base

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

don't rebuild base every time docker is built

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add foreman to base

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add 'gems' target for Gemfile

cargo build (wasmer) requires the interactive shell for some reason
during bundle install

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

do gem caching

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup Docker build

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Docker build fixup

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

tests passing

try breaking some stuff (hopefully this doesn't break anything)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

exclude cache/* from git

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add workflow to build a clean cache

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set up workflows for building cache + cleaning

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add empty option

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

remember to check out code and include language

ruby and rust runtimes/build contexts

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

latest setup-ruby

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

packages: write

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

build base

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try to get everything right

first published "base" image

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add target wasm32-wasi

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

build binaryen (wasm-opt)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

base can be a cache image too

in case you have just built base for the first time, you should build
the gem-cache tag with CACHE_TAG=base next

Then you can build gems, and set gems as the new CACHE_TAG for your
deploy build from then on. (I think?)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add wabt

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

setup-wabt@v1.0.0

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

only install wabt for base image

We are only compiling stat.wasm in the base image

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

adopt setup-wabt

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

1.0.2 setup-wabt

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

setup-wabt 1.0.3

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

setup-wabt 1.0.4

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try adding wabt (wasm-strip) to the path

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try fixing wabt installer to use cache too

we don't need all the wabt binaries, just the wasm-strip

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

mkdir local/bin earlier in workflow

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

cp -r is not specified

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

don't try building binaryen in our workflow

that's insanity; GHA will only provide us with amd64 runners, so that's
the type of binary we can expect to use (and Binaryen provides one)

We don't really need to do the wasm stuff in multiple architectures
since our output stat.wasm is architecture independent (and will likely
be compiled again later by cranelift or something else on the user side)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try to reify cache to include stat.wasm

when base build is finished, we can store stat.wasm and bring it back
as needed for building targets of other stages

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup syntax error / inconsistency

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try to mitigate cache

Post job cleanup.
Warning: Path Validation Error: Path(s) specified in the action for caching do(es) not exist, hence no cache is being saved.

there is no env.HOME

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set a new default for publish cache tag parameter

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add env.IMAGE_TAG

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try restoring these with `docker pull` first

if the built tags are fresh enough, they can be used as cache as well

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try adding bundle config directly in gem cache

I think that's the only place where it's going to be effective

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

To rebuild base

this fix is needed

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fix some more discrepancies

the `gems` image does not pass `bundle check` without this
the `deploy` image can now assume that `gems` will pass bundle check
(and so it does not need to mess with /usr/local/bundle/config anymore)

The values of BUNDLE_PATH, GEM_PATH, and GEM_HOME ought to be set by
default to /usr/local/bundle; they are kept here for reference in case
of nasal demons or whatever.

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try caching again

I guess we don't build deploy from gem-cache, but from gems

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

The End of Caching

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

and a nod to provenance belongs here

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

some of these instructions are wrong

let's fix them... (only fixes some of the wrong things 🤞🤡)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

let's try to make local dev work again

I know this worked before; even if we can't always depend on CI, we
should be able to get most of what we need from these caches (and use
them to quickly bootstrap the local dev environment)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

it's faster to build gems locally on arm64

rather than wait for GitHub Actions to do it with amd64 + qemu

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

it is faster to cross-build locally

it is much faster to build gems-base locally, then tag gem-cache and
make docker

Pushing layers takes a lot longer than anything else.

It ran bundle install and was finished in a second. It did not need Rust.

The deploy image should be small and fast

It is taking over an hour to build gems as multiarch on GitHub, but very
thankfully we don't need to build the gems often. We can schedule it as
an overnight job that happens once a week, or more like twice a month!

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup README to be true according to what we learn

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

we have provided for stat.wasm

so `make lib` is not needed, only `make test`

Also, remember to make a multi-arch Docker `latest` deploy target

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

trick Dockerfile into thinking stat.wasm is newer

Makefile is trying to rebuild stat.wasm, which fails because cargo is no
longer present in the deploy context

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

more explicit forcing

Make is noticing that one of the dependencies is missing and using it as
an excuse to rebuild stat.wasm (stat.wasm.raw)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

make has -t (--touch) for this

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add HOST parameter to secrets

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

use environment variable in ancillary db handlers

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add more permissions and a new auth mechanism

for app/models/measurement singleton

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

I don't know if we need this ClusterRoleBinding

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

I guess it was necessary

we need at least list CRDs

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

maybe we didn't

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Revert "I guess it was necessary"

This reverts commit 1e116b52d64a50e129201110d28877db48ea3257.

I guess it was necessary

we need at least list CRDs

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup kubernetes default endpoint

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add concurrencyPolicy

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

tighten up clusterrole quite a bit

from cluster-admin to list and get CRDs (which we need in order for the
client to reflectively discover leaf and project APIs; yay kubeclient!)

also, add "update" since it seems to be used by kubeclient as well

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

in lieu of production deployment for now

we'll have this one environment, which gets two hostnames (and the DNS
is configured by hand, at DigitalOcean, to match whatever test cluster
is in vogue right now...)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

I think it's list outside namespace

The operator isn't built to operate confined to a single ns

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add events to rbac

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add CI build workflow for main branch

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add missing "break" statement

we should not issue the "extra" delete twice
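
Roughly the shape of the fix (illustrative, not the actual code):

  candidates.each do |leaf|
    next unless leaf.stale?
    leaf.delete  # issue the "extra" delete once
    break        # and stop there, rather than issuing it again
  end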

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rescue Date::Error

this can be false
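
Date::Error is what Date.parse raises on unparseable input (it is a
subclass of ArgumentError); a sketch of the pattern, with a hypothetical
helper name:

  require "date"

  def parse_date(value)
    Date.parse(value.to_s)
  rescue Date::Error
    false
  end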

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

that wasn't exactly exiting cleanly

it's still supposed to clean up after the project gets deleted

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

decrease the amount of waiting

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add hosts to environment development

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try to reconcile all leaves on startup

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

try to introduce debugging, Stalled condition

when we detect that the resources should have progressed by now, we can
set the Stalled condition, which will trigger them to be reconciled again

While introducing the Stalled condition, I believe I have accidentally
solved recovery as well (🥂🚀🎉✅😁)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

upgrade Ruby, install rspec

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

delete vendor/

we won't be vendoring any gems or javascript

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

bundle update

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

move things around so upsert method is not so big

this shouldn't have changed any of the behavior, but it makes clearer
how convoluted our upsert method has already gotten...

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

factory_bot_rails

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

implement AR::BaseConnection into leaf_reconciler

implement it here first, reuse it everywhere later
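
One way such a helper can look (a sketch; the real BaseConnection class
may differ): check a connection out of the pool only for the duration
of a block, so reconciler fibers don't each hold one permanently:

  class BaseConnection
    def self.with(&block)
      ActiveRecord::Base.connection_pool.with_connection(&block)
    end
  end

  BaseConnection.with do
    Leaf.where(ready: false).find_each { |leaf| leaf.touch }  # hypothetical usage
  end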

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

factory_bot (bundle install)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

bundle update (puma)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

attempt to boil the ocean

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

debugging

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

reconcile_leaf.rb

It works but it crashes

@end_line=148, @end_column=47>]:Array (NoMethodError)
leafer.1  |
leafer.1  |       @selector.select(@timeouts.interval)
leafer.1  |                ^^^^^^^
leafer.1  | 	from /Users/kingdonb/.rvm/gems/ruby-3.2.2/bundler/gems/fiber_scheduler-442188f8c752/lib/fiber_scheduler.rb:122:in `run_once'
leafer.1  | 	from /Users/kingdonb/.rvm/gems/ruby-3.2.2/bundler/gems/fiber_scheduler-442188f8c752/lib/fiber_scheduler.rb:116:in `run'

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

exclude leafer from foreman so we can run it by hand

it crashes at the end, so something still is not right; however, it
seems like it performs, and we may be able to put this one back into
the rotation without any further changes (except this will need to be
reverted)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

process pool didn't work

not sure why, but this version is based on Ruby 3.1.4; it looks like it
works damn near every time, it doesn't crash until the end, and it's in
perfect shape to be shipped as the cronjob right now

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

adopt BaseConnection class one last place

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

revert to rubygems.org kubernetes-operator

GitHub will not permit me to pull in the bundle gem from GH Package
Registry without authenticating myself, which I cannot easily do inside
a Dockerfile (and we don't need this, since we didn't ultimately take
the Ruby 3.2 upgrade path for now)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rake app:version

$ rake app:version
Application version: 0.0.1 (187) by Kingdon Barrett on 2023-06-22

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

app_version gem is an oldie but a goodie

mort666/app_version#13

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

reorganize manifests into bases

podinfo is the inspiration for this design; we'll have deploy/bases and
deploy/overlays for now (with each environment as a folder in overlays)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add kustomization.yaml for each base

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rotate credentials.yml for production

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

hotfix

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rake app:render

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

lograge (Gemfile)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

initializer (lograge)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Health check route (200)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

RAILS_ENV production in cronjob

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add dev and production overlays

change RAILS_ENV variable and deployment namespace

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set /healthz as health endpoint

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Let's attempt to publish 0.0.1

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

checkout before prepare step

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set up ruby before rake

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

must bundle install before rake will work

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

populate bundler-cache from main branch

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

bundle lock --add-platform x86_64-linux

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release v0.0.1

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

hotfix workflow

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

hotfix workflow #2

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

update production overlay (0.0.1)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

we can fetch GithubOrg before health check

If we fetch GithubOrg too early, it won't tell us how many packages to
expect here

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

bump version in template, overlay, rake app:render

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

update checks in README (#29)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add Execute workflow (#30)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

kubectl apply -k

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Add a "publish wasm" step to tag release workflow

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

bump version in template, overlay, rake app:render

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup publish workflow

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

change execute workflow to fetch stat.wasm

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set release_name

Docs here suggest that release_name is the tag by default:
https://github.com/svenstaro/upload-release-action#input-variables

IME it looks like the commit message of the tagged commit is actually
being used, which can make for some ugly release messages (at least
until the release commits are generated by a release manager robot!)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add sbom and provenance, add cosign

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add release machinery to Makefile

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

echo current version during 'make prerel'

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.0

make prerel

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fix minor issue in publish-tag workflow

Now it should not print "vrefs/tags/0.1.0" in the release notes title

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Do these docker pull commands have any effect

I don't think so

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

I hope these docker pull commands had no effect

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

signatures slow things down pretty badly

if we want our development builds to return in less than 5 minutes,
they'll have to be unsigned and without SBOMs or provenance attestation

(I am not sure I had this set up correctly to begin with; there are many
warnings in the output from "syft", and I don't know how to read it at all.)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add builder id to publish tasks

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Try setting up cache manually

The Rust setup action considers the package root (.) to be the
workspace; in our case it's actually lib/stat.

The docs explain that another action is used for the cache config, and
that it's possible to disable the auto-configured cache and replace it
with your own incantations, as this commit does.

Hope this works 🤞🚀

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

We'll only use the rust cache in the base image

We're not building that wasm except in the base image, and at release
time

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

note: base still not publishing SBOM or signatures

gems, gem-cache, and base image are all unversioned and unsigned

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

don't need either of these

Don't need Ruby unless we're building something outside of the Docker
container, and...

We don't need the id-token permission if we're not signing anything

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

mirror caching configuration from publish

in the publish-tag workflow (this should make building rust modules in
the "publish-tag" workflow much faster!)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.1

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

de-emphasize the "latest" image

nobody should be pulling `ghcr.io/kingdonb/stats-tracker-ghcr:latest`.
Now that we are doing versioned releases, set imagePullPolicy to
IfNotPresent and use the versioned artifacts (or use the canary release
if you're strictly in a dev environment!)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

change from latest to canary in manifests

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

set imagePullPolicy in production

we should override this in the dev environment, but I want this change
to take effect in production so I'll write it there for greater clarity

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.2

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

let's change the definition of prod

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add two new CRDs for new reconcilers

we will now reconcile PackageVersions when we want to know about the
versions of a package, and PackageVersions::VersionLeaf is similar to
Project::Leaf in the existing controller pair.

Let's also finish implementation of Conditions here (in the following
commits)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

these are the new entrypoints we'll need

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

after reviewing the site structure, I think...

we can probably do this without a VersionLeaf

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

move patches related to ImagePullPolicy into dev

the default ImagePullPolicy is IfNotPresent

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

let's try to do this without any others

we only need one reconciler because the structure is only one layer deep
(there's no need to fetch multiple pages for each package, and we
already know which packages we want to monitor versions for, so there's
no need to rewrite GithubOrg or "Project" because there is no second
fetch.)

This also means we're not writing a second Wasm, lol

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

ensure sample

these are the seed records (this is not a real controller either)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

we also will not need this

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rails g model Version, VersionMeasurement

rails generate model Version package:references version:string download_count:integer
rails generate model VersionMeasurement package:references version:references count:integer measured_at:datetime
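
The generators wire up the belongs_to associations in the models they
emit, i.e.:

  class Version < ApplicationRecord
    belongs_to :package
  end

  class VersionMeasurement < ApplicationRecord
    belongs_to :package
    belongs_to :version
  end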

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

rails db:migrate

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add models

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

add default tests (rspec, factorybot)

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

fixup PackageVersion sampler

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

let's test with multiple PackageVersions

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.3

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

some errors in release 0.1.3

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.4

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

last ditch fix effort

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.5

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

forgot to do a thing

don't skip steps when you're releasing to production; always test in dev

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.6

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

measure many versions

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

don't abort so harshly on alternative sample kinds

Signed-off-by: Kingdon Barrett <kingdon@weave.works>

Release 0.1.7

Signed-off-by: Kingdon Barrett <kingdon@weave.works>