Internet Highlights XIII

Published: 2025-07-14

I know it's been a long time since the last post; sorry, but that's life. Here it goes:


Internet Highlights XII

Published: 2025-04-24

I know it's been a long time since the last post; sorry, but that's life. Here it goes. See y'all.


dev log: New git hooks

Published: 2025-04-05

Just want to give a quick update on my infra setup. After getting tired of doing scp to update my site and blog, I decided to use a git hook for that.

The result is a super simple git hook that updates the files changed in the commit and, for the files in the www folder, copies them to the served git page. I know this is kind of backwards, normally those files go to the server, but in my setup they live among the git files:

terminal.pink/
|-- HEAD
|-- aims.pdf
|-- bach-keynote.pdf
|-- bach-thesis.pdf
|-- blog
|   |-- HEAD
|   |-- branches
|   |-- config
|   |-- description
|   |-- hooks
|   |   |-- post-receive
|   |   |-- post-update
|   |-- index.html
|   |-- info
|   |   |-- exclude
|   |   `-- refs
|   |-- objects
...
|-- index.html
|-- index.br
|-- index.gz
...
|-- lin0
|   |-- HEAD
|   |-- branches
|   |-- config
...
		

So using www for those files makes it clearer that they are special ones. That lets me run my server and git under the same root, thus the same URL.

This is the hook, use it as you please:

#!/bin/sh

OUT="$HOME/terminal.pink"
OPTS="--no-commit-id --no-renames --name-status --root -r"

update() {
        echo "updating files with $1"
        while read -r op f
        do
                echo "$op $f"

                echo "$f" | grep -q "/" && \\
			mkdir -p "$OUT/${f%/*}" > /dev/null

                # the rest is copied to my blog
                case "$op" in
		"D") rm -rf "$OUT/$f" ;;
		"A"|"C"|"M")
		git cat-file -p "$1:$f" > "$OUT/$f" ;;
                esac
        done <<- EOF
        $(git diff-tree $OPTS "$1")
        EOF
}

echo "starting post-receive"
while read oldrev newrev ref
do
        case "$ref" in
        "refs/heads/main") update "$newrev" ;;
        esac
done
echo "done"

The hook for this blog is just slightly different.


Internet Highlights XI

Published: 2025-01-29

This time I'd like to dedicate this wrap-up to humans, who are terrible animals and are doing very bad things. Let me know what you think! Bye.


100 days of uptime

Published: 2024-12-24

I'm super happy to see that this very website, along with others (terminal.pink, dovel.email, derelict.garden and suavemente.org), all hosted on my Raspberry Pi Zero W in my living room, has reached 100 days of uptime!

This may not be important to you, but the theory behind it may be interesting; for now I will leave it at that because I'm in a hurry.

htop showing 100 days of uptime. And counting.


Hosting your git repositories - part 2

Published: 2024-12-23

This article will discuss giving public read access to your git repositories, for setting up private write permission see part 1.

Using the git dumb protocol

This is, in my opinion, the easiest way to let your git repo be cloned or pulled. The advantage of this protocol is that your repo can be fetched over HTTP; the git program knows how to clone over HTTP and will use it automatically when needed.

This means you can serve your repository using any HTTP server, e.g. Apache, Caddy, Tomcat etc., since your git files are just static files. There are drawbacks of course: this protocol is slow and old and has been replaced by the smart protocol, which unfortunately doesn't work with plain HTTP servers.

To start using this technique, configure your HTTP server to serve static files from a folder, say repos. Inside that folder you can create your bare repositories, e.g. proj1, with:

cd repos && git init --bare proj1

This will create a new (empty) repository. You can also make a bare clone of an existing repository using:

cd repos && git clone --bare [your repo url]

However, you will notice that these repos cannot yet be accessed over HTTP; that's because the repo is lacking some needed auxiliary files.

update-server-info

This command generates or updates the files a dumb server needs to serve the repo properly; you must run it manually when you create your bare repos:

cd proj1 && git update-server-info

Now you should be able to clone or pull from this repo, as sketched below. To run update-server-info automatically every time you push, you can use the git hook strategy described next, so you always serve updated content.
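
For example, assuming the repos folder is served at the root of a placeholder domain example.com, a clone is just:

git clone https://example.com/repos/proj1

git notices the missing smart endpoints and falls back to the dumb protocol by itself.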

Using git hooks

Git hooks are your friends, and they are even more useful with dumb servers. To run update-server-info on every push I chose the post-update hook, which runs after new commits are added; the sample that ships with git is good enough on my distro. If not, just add the command there and rename the hook to drop the .sample suffix.

This is the way I configured my project lin0, though there are still improvements to make to this setup. To be able to serve the files from the repo I also added git --work-tree=tree checkout -f main, which produces the index page you see. In the future I will change this to remove the tree path.
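
For reference, the resulting post-update hook amounts to something like this sketch; the tree path and main branch are specific to my setup:

#!/bin/sh
# regenerate the files the dumb HTTP protocol needs
git update-server-info
# check out the branch into the tree folder so it can be served as pages
git --work-tree=tree checkout -f main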

Other options

This short article showed the dumb server approach, but there are many others, some of which I'll probably cover in follow-ups. Some options are git-daemon and the git smart HTTP protocol; there are many more, but these are good starting points. Each has its own pros and cons, and in the next article we will discuss some of them. For me this combination of the dumb protocol and git hooks works best, but this is a matter of personal taste and requirements. Thanks.


Internet Highlights X

Published: 2024-12-10

That's all folks, happy holiday.


The gemini web and this blog

Published: 2024-12-05

As many of you already know, this site is also available on the Gemini web (not Google's Gemini). This is somewhat new and I still need to make some adjustments, as converting from HTML to gmi markdown is not perfect. In fact many things in my setup are not perfect.

First, my gemini server can only serve one domain per machine, so for now only my blog is available. My current workflow is to create the HTML content, then use html2gmi to create the corresponding .gmi file, regenerate the feed, and upload everything to my raspberry pi using scp. The issue here is the conversion: as posts are written with HTML in mind, the html2gmi tool has its own style that doesn't fit mine well. Lastly, my RSS feed generator can't extract info from gmi files, and thus only the link is shown.
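
The conversion step itself is a one-liner; this sketch assumes html2gmi reads HTML on stdin and writes gemtext on stdout, and the host and paths are placeholders:

html2gmi < post.html > post.gmi
scp post.gmi pi@rpi:gemini/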

Next steps are tweaking my publications, the RSS generator and the conversion tool to create better .gmi files; ideally this should rewrite links to .html files to point at the corresponding .gmi files, and somehow add metadata to posts, since there is no metadata support in the gemini markup spec.

I plan to improve support for the gemini web on this site and in the tools that I use; I hope this will incentivize folks to use this new and incredible publication platform. If you want to keep track of these developments here are some links:

Thanks!


Internet Highlights IX

Published: 2024-10-19

That's all folks, enjoy!


Announcing Lin0 v0.0.1

Published: 2024-09-29

I'm pleased to publicly share that Lin0 has reached v0.0.1 and can be downloaded or built for end users.

Lin0 (Linux zero) is a super minimal source-based linux meta-distribution, primarily aimed at power users or minimalism enthusiasts. It was born from exercises in how minimal a linux system can get.

At this time we provide 3 optimized images: Pinebook Pro, RPi 3B+ and HP EliteDesk 800 G1, as well as generic amd64 and arm64 images. In any case you can clone the repo and build for your architecture.

Lin0 leverages a small system built from scratch using a selected list of software, which includes:

Logo

             _._            
           e/` '\,.eo-__.   
          '/.' .|_/e--. '\e 
    ,;-o-.'|`  //e    e\ |` 
  ./' ,e0\o   //-o.__. ,. \'
 ./` /' -/e   e\o_/___.  \|'
 e|`/`,o-o\  /v-/e_.  '\. \.
'/  ._e._, \ //    \`  \e |'
'|'"/     \.V |    `|' `|'|`
`|e`|'    | # /    `|.  |`|`
e|` `    /\- ,\    e|'  '\` 
 '`    _/  / / \     `  `|' 
   .,wW'^^^//;^-^;^;w_      
	
The logo is a willow tree, which reminds me of the wise tree in the Pocahontas movie; it is also a small homage to my son William.

Obtaining

Please refer to the project site for the download links; there are tarballs, a Docker image and instructions to build your root filesystem.

Roadmap

The big plan is to have a complete project with many supported targets. I plan to provide images for more platforms, but that's the hard part, since I do not have the hardware right now. Other things planned are:

Contributions can be made by email using the common git email workflow; more info on the project's page. The mailing list is also very open to discussions and side topics, you are very welcome.

Thanks!

So far I have only published it here and on TabNews, and I think it had a good reception. Finally, I'd like to emphasize my commitment to creating open, quality software that can help change things toward a better software ecosystem; Lin0 is my current endeavor in that direction. Let's build a great community for Lin0 together!


Internet Highlights VIII

Published: 2024-09-22


astro is now ereandel

Published: 2024-09-01

My astro project was renamed to ereandel due to packaging needs, as there is another project named astro.

Earendel is the most distant individual star ever detected; also, Eärendil (a close pronunciation) is a character in J.R.R. Tolkien's universe. Thus the perfect name for this project (with a twist).

Thanks to this, Akash Doppalapudi packaged it for the Debian project, which is great news! Having a project published in the Debian repositories is a milestone; I hope this brings new opportunities for the project. Kudos to Akash for packaging ereandel and for having the patience that the Debian packaging guidelines require.


Our site accesses

Published: 2024-08-19

Just out of curiosity, this is a plot of connections to the domains hosted here:

plot of connections to my domains

Some considerations: the numbers represent successful connections to my server, that is, completed TLS handshakes.


Internet Highlights VII

Published: 2024-07-08


astro: 50 stars!

Published: 2024-04-22

Just wanted to share that a project of mine, astro, has reached 50 stars! That's my project with the most stars at the moment. I'd like to thank all the contributors, stargazers and people who helped the project reach this cool milestone.

I'd like to reiterate that the astro project is under active development and well maintained by me and open-source contributors. Looking ahead I plan to add more features such as:

And of course, listen to users' feedback and implement desired features. So if you want a feature feel free to open an issue and comment.

Thank you!


dev log: building my own distro

Published: 2024-02-26

Another adventure has started: I'm building my own distro! I'm not sure about the motivations, as it started naturally from my fiddlings with OSes. Some of the things I remember are:

Turns out there is a cool project called buildroot that makes building a minimal linux system very easy. So I used it to create the linux kernel and userspace programs for an RPi 3B+ I had lying around.

The result was a system that does the minimum: it boots the linux kernel without initrd directly from the RPi bootloader, then runs a simple shell script as init and drops to a shell.

This system uses only 22MiB of RAM, of which 14 are cache, and 500MiB of disk, of which 470MiB are the git binaries. I am very happy. Obviously there are drawbacks: no module hotplugging, which gave me a headache since my keyboard needed a kernel module; it took me a day to figure that out. I thought this was the kernel's job.

So far I'm having a great time learning; it turns out there are a lot of details we usually don't need to know. For example, firmware, which for the RPi is a must: in order to load the brcmfmac module its firmware must be present in the correct place. If not, whenever you modprobe it you'll get a subtle timeout error.

Luckily buildroot also facilitates this: just select the corresponding option in the firmware section. The next steps are building sbase, ubase and sdhcp. I also included a tiny C compiler so I can compile the rest of the system.

So far this is the init script:

#!/bin/dash

echo "Init started"

export PATH=/usr/bin:/bin:/sbin:/usr/sbin

# make the root filesystem writable
mount -n -o remount,rw /
# mount the kernel pseudo-filesystems (devtmpfs, not the old devfs)
mount -t devtmpfs dev /dev
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /run

# load the keyboard and wifi modules
modprobe hid-apple
modprobe brcmfmac

# spawn a login shell on tty1; power off when it exits
agetty -n --login-program /bin/dash tty1

shutdown -h
		

There is too much to do, still. I'll keep you posted.


jumentosec

Published: 2024-01-21

A friend of mine is launching, in his words, "The #1 Underground & Vendor Neutral Security Conference in Brazil", which will take place this year!

These are the links:

Looking forward to it.

Internet Highlights VI

Published: 2023-12-19


Introducing Bytes

Published: 2023-11-05

Bytes is a new series of posts starting today. Sometimes I notice or think funny things, so I decided to publish them. Here is today's:

I was watching Monk and there is one scene in which a boy coughs near Monk. The funny part is that this boy only shows up in that scene, so I watched the episode's credits just to see his entry.

This is the scene where the coughing boy shines in his participation:

scene with the coughing boy

and the credits screen:

credits screen with the coughing boy's name

Another great episode.


So I bought the pinebook pro

Published: 2023-10-19

It is usable 97% of the time; the other 3% is related to a weird bug with the keyboard, the touchpad, and screen sharing and video calls, but I'll explain. My current setup is Arch Linux with DWM. Here are some points:
Body
The Pinebook Pro laptop is lightweight, sleek, compact and pretty; it has a premium feel given by the magnesium shell and a gorgeous display. The microSD slot is also a very welcome find.
Battery
It has a decent battery; frequently I can use it all day long without a recharge. I'd only remove the barrel port, as the USB-C port can be used for charging, so there is no real need for it.
Touchpad
The touchpad feels a little weird sometimes: it is a little fuzzy and I have difficulty with right clicks, which don't register every time. The website says it's a large one, but I don't think so. I'd make it a little larger and have the click feel the same everywhere; it only clicks on the bottom.
Keyboard
A very good one, not the best; the only thing that bothers me is that sometimes I get a doubled key, maybe a firmware thing.
Camera
People could see me clearly, though sometimes the image gets really dark, I don't know why, maybe some misconfiguration on my side. The real issue is processing power: when I turn the camera on the laptop becomes unusable, totally lagged.
CPU/GPU
I think this is the only thing that bothers me every day: it is slow and there is no hardware acceleration in the graphics driver at the time of writing. I use very light software and it sometimes lags. But software support for most applications is fine, only missing OSS.
misc.
This is very personal: the only design detail I don't like is the display being a little bigger than the bottom part, so when it is closed the lid is not aligned, but that's really a matter of taste. The Pine64 store doesn't deliver to Brazil, and that's a huge bummer, especially because they do not answer any email or support ticket; horrible customer service.
In sum it is a great Linux laptop, especially considering it is only $219.99. I hope Pine64 continues to improve it; they are doing a great job for the Linux/BSD ARM community.

If you know how to solve any of the issues I found, please reply to this post. Thanks!


Internet Highlights V

Published: 2023-10-13

This time I'll add descriptions, per my latest feedback.


Internet Highlights IV

Published: 2023-03-31

here we go again...


My blog posting work flux

Published: 2023-03-30

In this post I will talk a little about my workflow for publishing content. I find it quite simple and therefore decided to share it with you.

The first part is to know the infrastructure that I built for my server. My blog and homepage reside on my Raspberry Pi Zero W, which is connected to my router with a small USB cable to receive power, and the router is connected to a UPS. I did it this way because I wanted to venture into the world of hosting, and I liked the results, but it is much easier to pay for a VM in any cloud.

Router configuration

This part is not difficult. The goal is to route traffic to my server; for that I entered the Arris router configuration and created a virtual host, gave the RPi a static IP, and added a port forward from port 443 to 4433. This way I can run the service without needing root privileges.

Some optional things that I decided to have, and with great difficulty, were:

Undoubtedly, this was the saddest part of the setup. However, these optional items simplify the code, since I don't need to configure DDNS, and they prevent interruptions of access, as my DNS always points to the correct IP.

The server

Now we come to the actual programming. The server is written in C and listens on an unprivileged port, so I run it as a normal user, which gives me more security and simplifies the process a lot, since my user has all the permissions needed for the publishing flow. The server's code can be found on GitHub: servrian.

In the server, I decided to use static pages, so servrian only works for that case. For the articles, I just write them in HTML.

Adding content

Now that all the configuration and development work is done, creating and deploying content is simple:
  1. Write the content
  2. Update index page
  3. Update feed and sitemap
  4. Run scp
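
The last step can be a one-line script; this is just a sketch, where the file names and host are placeholders for my real setup:

#!/bin/sh
# publish: copy the updated pages to the server
scp index.html feed.xml sitemap.xml posts/*.html pi@rpi:www/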

Conclusion

It wasn't an easy process overall, and my impression is that we are technologically behind, as the worst part was the internet plan. If there weren't so many complications with blocked ports and network configuration by the operators, the project would have ended in a weekend (at least the functional part). Of course, styling and content development can take an indefinite amount of time. As I wanted to integrate with email the project became a little more complex.

The code part is not complicated and can be even easier when using ready-made projects or pre-configured Docker images. Here I wanted to do everything from scratch for two reasons: 1. to learn how the many parts work internally, and 2. to create a lighter version than current projects.

It's this second point that I'm most proud of: everything is very light and efficient. The blog's homepage is 2700 bytes of valid and simple HTML and loads in 80ms; my portfolio, the page above the blog, is 575 bytes. This allows the project to be served from my Raspberry Pi Zero W, which only needs 5V to operate and also hosts other projects like my git and email servers.

These are the difficulties you may encounter if you decide to venture down this path, at least here in Brazil. I hope I've helped in some way. I say it's worth it if you value extreme simplicity, like to do things your way, want to get away from dependence on the infamous big techs, libraries and frameworks, and above all, want to learn a lot.

Future plans

I still want to change some things in the project, out of pure curiosity and zeal:

Hosting your git repositories

Published: 2023-02-21

Setting up a git server is easy and involves only common shell commands; this post will show you how I started my very first self-hosted git server. Find one extra computer and set up an SSH connection to it, and you are ready to start. Here I used my Raspberry Pi, which is always up [1].

To setup the git server you should do the following on the server machine:

  1. Create git user
  2. Add sshd keys
  3. Create projects dir
  4. Create empty git repo
  5. Using the git shell

Create git user

This step is just sudo useradd -m git, no secret here. Now log in as this user.

Add SSHD keys

This part is the creation of the authorized_keys file, so you can SSH in as this user. Basically you just add the allowed public keys there. In my case I just copied it from my main user, but the ideal setup is to create a new key and add that to the git server.

mkdir .ssh
cat new-key.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys 

Now you should be able to SSH into your git machine using the key you added.
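
To test it, try logging in with the new key (yourserver is a placeholder for your machine's address):

ssh -i ~/.ssh/new-key git@yourserver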

Create projects dir

This step is optional but I like creating a dedicated folder for my projects, so I ran: mkdir git, and entered it.

Another cool thing to do is to change the default git branch:

git config --global init.defaultBranch main

Create empty git repo

This is the command that creates a repo on the server, so you can push to it. First create a folder, then issue the init command:

mkdir project
cd project
git init --bare

At this stage you have a fully functional git repository; to use it, proceed as you would for any new repo.

Using your new repo

Now, on your other machine, you can init a repo and push (yourserver below is a placeholder for your server's address):

cd project
git init
git add .
git commit -m 'Initial commit'
git remote add origin git@yourserver:git/project
git push origin main 

You can stop here if you want, but in this state there are some annoying things on the server that can drive you nuts, for example:

The next section will deal with these issues using the git shell and some configurations.

Using the git shell

The first thing to improve is to disable port forwarding and other interactive features, so add this to the start of each entry in the ~/.ssh/authorized_keys file:

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty 

Now SSH users cannot get a full interactive session; only the restricted shell we are going to configure will be available.
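
A complete entry then looks like this; the key material is elided and comes from your new-key.pub:

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... you@laptop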

Then we must set the git shell to be used on login. So log in to your server machine and check whether git-shell is listed in the /etc/shells file with cat /etc/shells; if you do not see git-shell there, you can add it:

sudo echo "$(which git-shell)" >> /etc/shells 

But I advise you to use an editor. Now make it the login shell for the git user:

sudo chsh -s "$(which git-shell)" git

Now you will not be able to get a shell when logging in, so this user is useless for anything other than git stuff. You can only run git commands like pull, push and clone, plus the commands we create in git-shell-commands.

Setting up a greeting and commands

The git-shell can be customized by creating the folder git-shell-commands in the git user's home. The first and funniest thing to do is to show a message when a login is attempted.

You can present a greeting to users connecting via SSH by creating a file named no-interactive-login in the folder we just created. It's fun, e.g.:

#!/bin/sh
echo "welcome to your git server!" 

So when someone tries to log in to your server this message is shown.

Adding programs to this folder will let users run them. There are some convenient commands to add, for example: creating a new repository, deleting one, and listing all repos. To make them available, don't forget to make them executable:

chmod +x git-shell-commands/program 
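
As an illustration, a tiny hypothetical list command saved as git-shell-commands/list could be:

#!/bin/sh
# list: show the repositories in the projects dir
ls "$HOME/git"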

A good starting point is [2].

Conclusion

I think this is a good configuration: it is safe and lets you fully interact with git, even creating and deleting repos.

At the same time this configuration is flexible, as you can keep adding new commands. But there is room for improvement; for example, your repositories have zero visibility, so there is no collaboration.

Adding public access can be done using git-daemon or by setting up git over HTTP. But those are subjects for other articles.

References

  1. My RPi; in the future it will have a PoE HAT. (Image: RPi Zero W in my living room, connected to my router.)
  2. git-shell-commands repo



Using WWW-Authenticate for user authentication

Published: 2022-02-17

User authentication is a complicated subject, and we can make things even harder if we don't want to use cookies or JavaScript. Authentication is a necessity for almost all web services, and there are many tools and protocols to aid developers on the subject. Using the WWW-Authenticate header I was able to handle user sessions without any code on the client side, just some tweaks on my backend. The documentation for the standard is available at [1].

The HTTP WWW-Authenticate header is a powerful tool and part of the Basic HTTP authentication standard, but I only became aware of it recently. In this post I'll tell how I used it to handle user sessions in my project tasker.

Before we start, let me give a quick overview of the solution.

Pros

Cons

Implementation

Basically all we have to do is add the WWW-Authenticate HTTP header to a response with status 401. This makes browsers show a dialog prompting the user for credentials. After that the browser will automatically send the Authorization: ... header on requests.

This can be done when the user visits a login page or tries to access private content. In my case I added it to a login page, the code goes like this:

func login(w http.ResponseWriter, r *http.Request) {
	user, pass, ok := r.BasicAuth()
	if !ok {
		// the user didn't send credentials
		w.Header().Set("WWW-Authenticate", "Basic")
		w.WriteHeader(http.StatusUnauthorized)
		return
	}

	// check credentials
	...

	// if ok redirect the user to its content
	http.Redirect(w, r, "/user.html", http.StatusFound)
}
	

I am using the Basic authorization scheme, but many others are supported, e.g. Digest using MD5, SHA-256 and so on; RFC 7616 [2] has all the info needed.

Logging in

The way I designed the login flow is the following:

  1. User requests the /login page without authorization header
  2. The server responds with 401 and includes the WWW-Authenticate header
  3. The user fills in username and password and requests the same path again, but now the browser includes the Authorization header
  4. The server checks the credentials and, if OK, sends a redirect status, e.g. 301, with the location of the new page. If not OK, the server sends an error page, so the user can retry after a refresh

When browsers receive this header in a response they open a dialog for the user. Some aspects can be configured: for example, if realm is set on the header it will be displayed to the user, but be careful, this option serves a purpose; check [1] for more info. The dialog looks like this on Chromium:

Chromium login dialog

Clicking sign in makes the browser repeat the request with the Authorization: ... header on; as I set the Basic scheme, the browser will send the credentials base64 encoded.

The nice thing about this is that the browser will keep sending the credentials on subsequent requests to the same domain until it receives a 401 status in response.
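
You can watch this exchange with curl; example.com/login here is a placeholder for your own endpoint:

# first request without credentials: expect 401 and a WWW-Authenticate: Basic header
curl -i https://example.com/login
# retry with credentials; curl sends Authorization: Basic <base64 of user:pass>
curl -i -u user:pass https://example.com/login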

Server code for logged in users

Now that users can log in, every time a private page is requested we must check the credentials. In this project I used the Basic scheme, so I check it using Go's corresponding http functions:

user, pass, ok := r.BasicAuth()
if !ok {
	w.Header().Set("WWW-Authenticate", "Basic")
	w.WriteHeader(http.StatusUnauthorized)
	// send an error page
	return
}

// check user and pass
...

// serve your content
	

This way, if a request comes unauthenticated for some reason, the server will ask again for credentials. Another option here would be to redirect the user to the login page.

Logging out

Logging out is done by simply returning a 401 without the WWW-Authenticate header:

func logout(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusUnauthorized)
	// serve a logout html page
	...
}
	

Final remarks

This is the method I'm using right now and I find it pretty good: it uses only standard features that have been there for years, nothing new; there is no client-side JavaScript and no cookies, which makes it easy to maintain and satisfies even the most demanding users.


References

  1. MDN docs
  2. RFC 7616

Notes

This article was featured in the Golang Weekly newsletter! I feel lucky, thanks Peter and Glenn.

A grayscale and red Xresources theme

Published: 2023-01-28

Recently I changed my terminal theme a bit and came up with a smooth grayscale and red palette. The red is to look better on e-ink displays that also have one color, normally red. But I don't have such a monitor, yet.

The theme is defined in my Xresources file; here it is:

! URxvt settings
Rxvt.scrollBar: false
URxvt*font: xft:JuliaMono:size=11
URxvt*boldFont: xft:JuliaMono:bold:size=11
URxvt*italicFont: xft:JuliaMono:italic:size=11

URxvt.iso14755: false
URxvt.iso14755_52: false
URxvt.geometry: 120x42
URxvt*metaSendsEscape: true

*.background:   #000000
*.foreground:   #dddddd
*.cursorColor:  #888888
*fadeColor: #222222

URxvt*color0:  #ffffff
URxvt*.color8: #ffffff

URxvt*.color1: #aaaaaa
! compressed files
URxvt*.color9: #888888

! types
URxvt*.color2:  #662222
URxvt*.color10: #772222

! statements
URxvt*.color3:  #cc0000
URxvt*.color11: #ee0000

! directories
URxvt*.color12: #ffffff

! some things
URxvt*.color5:  #eeeeee

! strings
URxvt*.color13: #aaaaaa

URxvt*.color6:  #222222

! base functions
URxvt*.color14: #555555

! crontab comments
URxvt*.color4:  #333333

! comments
URxvt*.color7:  #333333
URxvt*.color15: #333333
		

I intend to create a light version soon.
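
To try the theme after editing the file, merge it into the X resource database and open a new terminal:

xrdb -merge ~/.Xresources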


Internet highlights III

Published: 2023-01-23

Here we go again...


The best Web Framework

Published: 2022-11-29

In this post I will introduce you to plain, the best web framework you'll ever see. It has most features you seek on the current modern frameworks:

Pretty good huh? Let's call this framework plain, because good frameworks have catchy names.

Initializing a project

Ok, so you want to use it in your project, how do you start? Simple: the output is meant to be sent as a .html file, so we just create a file named index.html:
<!DOCTYPE HTML>
<html>
<head>
	<title>Test</title>
</head>
<body>
	<h1>first heading</h1>
	<p>this is a paragraph.</p>
</body>
</html>
		

Styling your elements

Style is important, and this framework makes it super easy to add custom styles to your elements; it uses something called CSS.

There are 2 ways to add it to your pages. The first one is to add a style tag to your head element; this is the most common way.

<!DOCTYPE HTML>
<html>
<head>
	<title>Test</title>
	<style>
		h1 {
		font-family: serif;
		}
	</style>
</head>
<body>
	<h1>first heading</h1>
	<p>this is a paragraph.</p>
</body>
</html>
		

The second one is inline, so your style applies to only that element.

<h1 style="font-family=serif">first heading</h1≶
		

Using javascript

Of course this incredible framework has built-in support for javascript, because javascript is mandatory for any decent web framework. So plain comes with, not just one, but 3 ways of adding js to your pages, each with its features:

Inline

This is the simplest way of running js on your page; using this method the js script is run in the order it is encountered in the page. You can add a script using the script tag, see this example:

<h1>here comes the js</h1>
<script>
	var s = document.createElement('script');
	s.type = "text/javascript";
	s.src = "link.js"; // file contains alert("hello!");
	document.body.appendChild(s);
	alert("appended");
</script>
		

Your scripts are already inflated and hydrated, healthy!


Internet highlights II

Published: 2022-11-02

I know more than a week has passed, so I do not plan to release these in a weekly manner; I'll post whenever the list gets enough content. This time I collected more than double the last post. Here they are:

Some explanation

No explanations this time, there are too many; but in general there are games, some funny websites, and a few with informative content.

Some sites are very small and some do only one thing, or are just one image. Those are very cool because they stick to their own manifesto.


Internet highlights I

Published: 2022-10-15

These are some interesting things I saw during the week that I'd like to share. To be quick they are:

Some explanation

p5js
I don't like js too much, but this library lets you create incredible games very easily; one example that caught my attention is labyr.in/th.
sdf.org
In a few words it is a public access unix system, being a member grants you access to a ton of services.
l-dynamic-libraries
A neat article about linux and dynamic loading.
electrictrash
A list of interesting things found in the internet.
chiark.greenend.org.uk
Home of the PuTTY program and host of some users' pages.
he-man
The He-Man of the Brazilian northeast faces 35 men in the arena.

Unmarshaling json to inner struct in golang

Published: 2022-08-10

sometimes we only want one small part of a bigger go struct while unmarshaling a big json object. with this small code you get only the inner struct that matters, see this very simple example:

{
	"Name": "asdfasdf",
	"Data": {
		"A": 12
		"B": 4
	}
}
		

suppose you only want the information in the Data object, first let's define our types:

type Body struct {
	Name string
	Data *Inner
}

type Inner struct {
	A int
	B int
}
		

hence only the Inner field in Body is desired. to unmarshal this use the code below:

package main

import (
	"encoding/json"
	"fmt"
)

const s = `{"Name": "asdfasdf", "Data":{"A": 12, "B": 4}}`

type Body struct {
	Name string
	Data interface{}
}

type Inner struct {
	A int
	B int
}

func main() {
	data := Inner{}
	json.Unmarshal([]byte(s), &Body{Data: &data})
	fmt.Printf("%#v\n", data)
}
		

notice that only data is in scope; the Body struct only existed during the unmarshal.

this is useful for saving a small piece of memory.


Comparing int switch case with map speed in go

Published: 2022-02-22

Continuing on the same idea from article 5, here I wanted to investigate the complexity of those two cases.

For the int type this benchmark is easier as I only vary the number of cases inside the switch, and the map keys are just the numbers.

Cases

The benchmark was done like this:

Given a number n of cases.

I've done that for values from 5 to 10000, using a shell script to hardcode all values in the source file. The values are random strings of size 16.

Here is one example:

package switchvsmap

func switchCaseInt5(input int) string {
	res := ""
	switch input {
		case 0: res = "9U5mScxLZLbQfj7y"
		case 1: res = "P0CoRrj0x7PwOZTO"
		case 2: res = "jAbhtkEBmEXlmQ15"
		case 3: res = "2GJV7FBmYemqF7VB"
		case 4: res = "AflqdquuhoUBqYT0"
	}
	return res
}

func mapCaseInt5(input int) string { return mapInt5[input] }

var mapInt5 = map[int]string{
	0: "T5Nlajp4dWNsdAxq",
	1: "VLXyeeQVNP8LNEfS",
	2: "GfjlCRLOiv5qrj3m",
	3: "yySDbcA0Mf7J90qA",
	4: "GdKSg4pR8xpRb6Lx",
}

func switchCaseInt10(input int) string {
	res := ""
	switch input {
		case 0: res = "wylNCLjLylMZgjo9"
		case 1: res = "e6X044Vj9OyNliBC"
		case 2: res = "MfSoK3jCk3LujGUM"
		case 3: res = "FY8Z9owHtWAiA3eJ"
		case 4: res = "sof0DLcmDeR1nxGf"
		case 5: res = "Rny0iTpLbh79Tfl4"
		case 6: res = "XyFaFrCKpOLCIL6D"
		case 7: res = "OvYkV1NmdUpQA0q8"
		case 8: res = "cL7RWN0aSCEno37m"
		case 9: res = "AMgSpv4t7BN1Lj8l"
	}
	return res
}

func mapCaseInt10(input int) string { return mapInt10[input] }

var mapInt10 = map[int]string{
	0: "KlRf6TsIkvoaOf01",
	1: "IdBDDIIjLKnULDUA",
	2: "MmqxCLC4ssC8AoJo",
	3: "FJSFb2ozxQn0QRhu",
	4: "ISx7BqtOGsgcCODS",
	5: "vzHJ0gAhtj2Ejx4m",
	6: "Wm24QTHb4fVA34jt",
	7: "T6o64gYAr3N5Ppxr",
	8: "gspkuIF7rHYUySma",
	9: "SJAJSFdYKhWp7ZNH",
}
.
.
.

The complete file has 149,112 lines.

The test file was generated using the same strategy:

package switchvsmap

import (
	"testing"
)

func BenchmarkSwitchCaseInt5(b *testing.B) {
	for n := 0; n < b.N; n++ {
		switchCaseInt5(n % 5)
	}
}

func BenchmarkMapCaseInt5(b *testing.B) {
	for n := 0; n < b.N; n++ {
		mapCaseInt5(n % 5)
	}
}

func BenchmarkSwitchCaseInt10(b *testing.B) {
	for n := 0; n < b.N; n++ {
		switchCaseInt10(n % 10)
	}
}

func BenchmarkMapCaseInt10(b *testing.B) {
	for n := 0; n < b.N; n++ {
		mapCaseInt10(n % 10)
	}
}
.
.
.

Then I just ran the go test command and plotted it.

Result

graphic of benchmark time

The x axis is the size of the switch/map, and the y axis is the ns/op value that go test outputs.

For a big number of cases both methods perform very closely; due to the big variation in speed I cannot clearly pick a faster one. One thing that surprised me is that the complexity looks more logarithmic; my expectation was to see linear growth for the switch case.

Final remarks

We see that the switch statement is faster than the map up to 3000 cases; after that it is not guaranteed. I don't think anyone would hardcode that many cases in a real-life example.

The advantage of the map is that it is a dynamic structure, I mean, it can be modified at runtime. So it is no surprise that the switch case is faster, since it is fully determined at compile time and thus can be optimized.

My conclusion is that if you know your cases at compile time, then use the switch case, otherwise use the map.


Comparing switch case with map speed in go

Published: 2022-01-24

I decided to do a simple benchmark between two solutions I often use in my code.

The situation is: you have an input, not necessarily known at compile time, and you'd like to assign a value that depends on it. I made this benchmark using strings and ints as inputs.

The solutions I see for this are the following: a switch statement over the input, or a map keyed by the input.

So which one is faster?

string input

The code is really simple. First I defined the functions.

package switchvsmap

var selector = map[string]string{
	"1": "one",
	"2": "two",
	"3": "three",
	"4": "four",
	"5": "five",
	"6": "six",
	"7": "seven",
	"8": "eight",
	"9": "nine",
	"0": "zero",
}

func SwitchCase(in string) string {
	var res string
	switch in {
	case "1":
		res = "one"
	case "2":
		res = "two"
	case "3":
		res = "three"
	case "4":
		res = "four"
	case "5":
		res = "five"
	case "6":
		res = "six"
	case "7":
		res = "seven"
	case "8":
		res = "eight"
	case "9":
		res = "nine"
	case "0":
		res = "zero"
	}

	return res
}

func MapCase(in string) string {
	res := selector[in]

	return res
}
	

And created the test file.

package switchvsmap

import (
	"math/rand"
	"testing"
)

const chars = "0123456789"

var (
	cases    = make([]string, 10e7)
)

func init() {
	for i := 0; i < 10e7; i++ {
		cases[i] = string(chars[rand.Intn(len(chars))])
	}
}

func BenchmarkSwitchCase(b *testing.B) {
	for n := 0; n < b.N; n++ {
		SwitchCase(cases[n])
	}
}

func BenchmarkMapCase(b *testing.B) {
	for n := 0; n < b.N; n++ {
		MapCase(cases[n])
	}
}
	

You can see I initialized a very big slice and filled it with random chars. That is to make the tests unpredictable.

int input

I remade the tests using ints as input.

var selectorInt = map[int]string{
	1: "one",
	2: "two",
	3: "three",
	4: "four",
	5: "five",
	6: "six",
	7: "seven",
	8: "eight",
	9: "nine",
	0: "zero",
}

func SwitchCaseInt(in int) string {
	var res string
	switch in {
	case 1:
		res = "one"
	case 2:
		res = "two"
	case 3:
		res = "three"
	case 4:
		res = "four"
	case 5:
		res = "five"
	case 6:
		res = "six"
	case 7:
		res = "seven"
	case 8:
		res = "eight"
	case 9:
		res = "nine"
	case 0:
		res = "zero"
	}

	return res
}

func MapCaseInt(in int) string {
	res := selectorInt[in]

	return res
}
	

I'm showing only the changes.

var (
	casesInt = make([]int, 10e8)
)

func init() {
	for i := 0; i < 10e8; i++ {
		casesInt[i] = rand.Intn(len(chars))
	}
}

func BenchmarkSwitchCaseInt(b *testing.B) {
	for n := 0; n < b.N; n++ {
		SwitchCaseInt(casesInt[n])
	}
}

func BenchmarkMapCaseInt(b *testing.B) {
	for n := 0; n < b.N; n++ {
		MapCaseInt(casesInt[n])
	}
}
	

And combined them in the same file.

Results

I used the command go test -bench=. -benchtime 10000000x to run the benchmark; I need to set benchtime so I don't overflow my test data.

goos: linux
goarch: amd64
pkg: mapvsswitch
cpu: Intel(R) Core(TM) i5-4570T CPU @ 2.90GHz
BenchmarkSwitchCase-4      	10000000	        20.02 ns/op
BenchmarkMapCase-4         	10000000	        26.99 ns/op
BenchmarkSwitchCaseInt-4   	10000000	         9.188 ns/op
BenchmarkMapCaseInt-4      	10000000	        20.55 ns/op
PASS
ok  	mapvsswitch	13.841
	

So far the switch statement is faster. I was expecting that, but let's see how it behaves when we increase the number of input cases.


Priority zero

Published: 2022-01-20

Every time I hear someone say "this is priority zero", as if that task were the highest priority, I feel a pang. How can priority zero be the highest priority?

Priority zero is zero priority. Listen to the sentence.

I wonder what would happen if another task arrived that was even more urgent than the current priority zero. Would it get priority -1? Or would we bump priority zero up to 1 and make the new one 0? But what if a task with priority 1 already existed? We would have to move 1 to 2, 0 to 1, and finally insert the new one. And so on until we had updated them all.

People who use this nomenclature seem not to consider that a task more urgent than the current one can indeed arrive, however urgent the current one seems. It also sounds strange, because "maximum priority" suggests a large number, not zero.

So use an increasing scale for priorities, since there will always be room for a higher-priority task: if your highest-priority task is 10 and a new, more urgent one appears, it can be priority 20. That way it will be the highest priority, and we still have room for intermediate tasks.

I think Portuguese itself already conveys the meaning and guides the way of thinking: "a maior prioridade é..." ("the highest priority is..."), "mais prioritária..." ("higher priority...") and "prioridade máxima" ("maximum priority") all convey a sense of something increasing. But I'm no specialist in the Portuguese language, I just noticed this hint.

Tell me what you think; I can post it here if you like. Thanks!


My story choosing a linux distro

Published: 2021-11-25

I started using Linux when I was 13 or 14, I can't remember exactly, after my HP notebook, with Windows XP, got a BSOD. I was tired of having to install drivers and many programs by going to their websites and downloading them; also, anti-virus was boring, and from time to time I had to wipe the HD and start clean because the system was very slow.

So I remember I googled "free operating system", and that's how I heard about Linux for the first time. So I started installing distros on my machine. The problem at the time was the internet connection: it was slow, and I had to connect using PPPoE because that was the internet service I had at home. This alone made many distros not work for me; if I could not connect to the internet, I had to find another distro.

I tried Debian, Knoppix, Fedora, Mandriva, Sabayon (it had a beautiful interface), Big Linux, openSUSE and Gentoo, but I used them only a little. I could only explore what they installed from the live CD, and if I couldn't find the PPPoE setting I had to uninstall it and go to the next distro. I had very limited knowledge of how a Linux system worked and which programs came on the live CD; at that time the website I accessed the most was probably DistroWatch.

First usable distro

Things changed when I installed MEPIS Linux: it had an easy way to set up the internet, so I could connect using a cable and browse.

Mepis Linux desktop. By Treize1101 - Own work, CC BY-SA 3.0

So I explored it. The first thing I did was learn to install packages: if you're new to Linux, one of the best things about it is package management, and MEPIS was Debian based. So I learned how to do sudo apt-get install [pack] and installed Compiz. But then I wanted my wifi chip to work; I was using a cable and that's inconvenient. Thus I stared at another problem: wifi support and proprietary drivers.

So my piece of advice so far for choosing a Linux distro is:

Nowadays things seem better, at least for wifi card support, but buying a laptop that was 100% Linux compatible was almost impossible 15 years ago.

Hardware support

At that time wifi chips, like my Broadcom BCM4312, were new and drivers were hard to find; my option was to use the Windows driver with ndiswrapper, and that was complicated for me. There was a website dedicated to gathering info on wireless card support on Linux, linux wireless (it has changed a little since), and it helped me a lot in setting up my Broadcom card.

Another issue was my nvidia card, a GeForce 6200, a really good card, but the only decent driver was the proprietary nvidia one, which meant heading over to the website, downloading some weird file and executing it. It sometimes failed to install, and I had to use the distro without 3D acceleration; at that time the nouveau driver was taking its first steps, so no 3D support and poor 2D performance. I think it has really evolved since then, but I'm not sure it is good for gaming today. I remember accessing Phoronix very often, anxious for benchmarks of the newest driver; Michael Larabel deserves my kudos for all the incredible work he has been doing, and still does, for the Linux, open-source and benchmarking communities.

Ubuntu

Fortunately Ubuntu came next and solved both problems for me: it had the NetworkManager applet, which worked like a charm, and the proprietary drivers menu made it easy to configure and add the nvidia driver. After installing I had some issues with the splash screen, but no big deal. I could even play the GNOME games which came pre-installed, and Counter-Strike using Wine!

That distro also had some quirks: it was unstable, and sometimes I broke my system by installing or updating packages, as I was doing all sorts of things with it, and had to reinstall it using the CD I bought at a newspaper stand. My real problem with Ubuntu was upgrading it; it was easy to break by doing a version upgrade, and it still is. I was enjoying Ubuntu, but sometimes it pissed me off with outdated software: I needed new versions of things like Wine and drivers, and Ubuntu didn't have them. So take this into account when choosing your distro: Debian packages are much more stable and updated less frequently, hence Debian and Debian-based distros often contain packages that are slightly outdated. If you need bleeding-edge software without having to upgrade the whole system, like me, you may want a rolling release distro.

So I went to Google again and searched "rolling release distro"; guess what I found...

Arch linux

Arch has a difficult installer: it is text based and makes you choose the software. I had to start over a couple of times because I missed some important packages. It uses the pacman package manager (great name), which I think is simply incredible. The good part is that I learned a lot about how a Linux system works, how to configure it and what it needs to run. I really miss the BSD-style init; now it uses systemd, sadly. The best part is the wiki pages, which are really great and complete. Arch also has the AUR, the Arch User Repository, which contains packages submitted by users; I always find the packages I need there.

The only bad thing is that it is not user friendly, so most users do not use it. It comes with the bare minimum (I like that) and lets you choose what to install next. Almost all configuration must be done by hand, and that includes learning it, so it can become tedious. There are some Arch-based distros which may help with this, so you may want to check them out; I'll stick with Arch, for now.

Final remarks

To sum it up, my two cents on what to take into account when choosing a Linux distro are the previous points, plus:

I think that having those will make it easier for a user to use their system, build upon it, customize it and have a great experience; beyond this it is a matter of choice of software.

And if nothing really pleases you, consider rolling your own distro, or building LFS. I already tried it and it was a great trip; this is, IMHO, the ultimate way to learn how a Linux system works. And it does not end there, you can also try something totally different, like a BSD.


The sound of characters hitting the terminal

Published: 2021-11-10

Sometimes I'm coding during the night, watching the output of a new program print in the terminal.

A terminal full of characters

But it only happens when the terminal receives a big buffer; it doesn't happen with a pager or just a few lines.

So when these letters appear I hear a sound like an impact; if you ever played Front Mission 4, it's like the impact from some of the shotguns.


My PS1 variable

Published: 2021-11-02

I often use the Linux terminal, and one thing that helps me during the day is using the history to see past commands:

history

Then I run the command with !, e.g.: !76. As I do this frequently, one thing that makes this process faster is my shell prompt showing each history number:

prompt

So it's easy to re-run commands. To show the command number on your prompt, add \! to your PS1 definition. Mine is:

"\u@\w \! $ "

Running and debugging go lambda functions locally

Published: 2021-10-18

A solution without CLIs, Docker or big frameworks; just a binary, a debugger and simple configs.

Debugging a lambda function written in Go locally is not a trivial task. This article is my take on the subject; it represents the result of a week of intense research and frustration pursuing a thing that should be trivial for any developer: running your code on your machine and attaching a debugger to it. This setup is great for development and essential for building high quality software, but even after days of effort with the recommended tools, aws-sam-cli and the serverless-framework, I wasn't able to properly step through my code, so I gave up on them and decided to do things the old way. I ended up with a very simple setup that lets me step through my code, check values and dig into the function execution, with real debugging, and it was adopted by the back-end team at my company.

So here it goes. The setup is basically the following:

  1. Build your lambda function with debugging symbols
  2. Run it and attach the debugger
  3. Make a RPC call to it using a client (included here)
  4. Follow the code execution in VSCode
VSCode files needed

Needed software

Make sure your $GOPATH/bin folder is in your PATH so that VSCode can find them.

Before we start

Just a little briefing: a lambda function is basically an RPC (remote procedure call) server, and RPC servers work in a different way: they advertise the methods they have available, and clients call these methods by passing the method name and the needed arguments, if any. In a lambda function the exposed function is called Invoke() and it's defined on the Function type, so the method called is Function.Invoke; this function takes only one argument: an InvokeRequest. This type is defined in the aws-lambda-go/lambda/messages package, and it's defined as:

type InvokeRequest struct {
    Payload               []byte
    RequestId             string
    XAmznTraceId          string
    Deadline              InvokeRequest_Timestamp
    InvokedFunctionArn    string
    CognitoIdentityId     string
    CognitoIdentityPoolId string
    ClientContext         []byte
}

Luckily the only field that matters to us is Payload, and it is simply the JSON that will be passed to the lambda as input. The last piece of information is that a lambda function, as an RPC server, listens on a port; this port is chosen at runtime through an environment variable named _LAMBDA_SERVER_PORT (this is the code responsible: GitHub). So we must define it.
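
To see these pieces working outside VSCode first, you can run the built binary by hand; a sketch, assuming the binary is named backend and that awslambdarpc's default address matches port 8080:

# run the lambda locally as an RPC server on a fixed port
_LAMBDA_SERVER_PORT=8080 ./backend &
# invoke it with an event file
awslambdarpc -e events/input.json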

Configuration

First we must build our lambda function with debugging symbols, the build command goes like this:

go build -v -gcflags='all=-N -l' your/file.go

The important part is -gcflags='all=-N -l', which disables optimizations and inlining so the debugger can properly step through the code. You may want to add it to your Makefile or whatever you use to build; we will set up a task for it in VSCode shortly.

Now create the input JSON files your function is supposed to receive; it's convenient to create a folder for them, as they tend to multiply. I chose events. Pay attention to the type your function is expecting to receive: some functions take input from different AWS services, so you must adjust the JSON accordingly. This received type is defined in the Handler function you pass to lambda.Start in the main file, here is an example:

func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    ...
}

In this case the input type for this function is an APIGatewayProxyRequest and the output type is an APIGatewayProxyResponse; that means your input and output JSONs will be of that form. Take a look at the events package to understand the format, as it can be confusing at times and can lead you to lose hours trying to get it right.
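
For example, a minimal event for the handler above could be created like this; hello.json is a hypothetical name, and only a few APIGatewayProxyRequest fields are filled:

mkdir -p events
cat > events/hello.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/hello",
  "body": ""
}
EOF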

The launch file

VSCode uses the launch file, in .vscode/launch.json, to configure debugging sessions, here we will declare the needed port for the lambda function and how the debug session is to be setup, this is mine:

{
    "version": "0.2.0",
    "configurations": [
	{
	    "name": "Launch",
	    "type": "go",
	    "request": "launch",
	    "mode": "exec",
	    "program": "${workspaceFolder}/backend",
	    "env": {
		"_LAMBDA_SERVER_PORT": "8080"
	    },
	    "args": []
	}
    ],
    "compounds": [
	{
	    "name": "build and debug",
	    "configurations": ["Launch"],
	    "preLaunchTask": "build-debug"
	}
    ]
}

I chose port 8080 for the lambda, but you can change it to whatever you prefer. The compounds field is very convenient: it lets us run a task before starting the debug session, so we point it at the build-debug task to build our function for us.

The tasks file

This file, .vscode/tasks.json, is where common build tasks are declared, but you can declare many other things, for example, getting input from the user. Here we will define two things:

This is the tasks.json file I'm currently using:

{
    "version": "2.0.0",
    "inputs": [
	{
	    "id": "json",
	    "type": "command",
	    "command": "filePicker.pick",
	    "args": {
		"masks": "events/*.json",
		"display": {
		    "type": "fileName",
		    "json": "name"
		},
		"output": "fileRelativePath"
	    }
	}
    ],
    "tasks": [
	{
	    "label": "build-debug",
	    "type": "shell",
	    "command": "go build -v -gcflags='all=-N -l' ${file}"
	},
	{
	    "label": "event",
	    "type": "shell",
	    "command": "awslambdarpc -e ${input:json}",
	    "problemMatcher": []
	}
    ]
}

Some explanation here: the masks field is where you point at the folder with your JSON events; you can change it at your discretion. The chosen file is then substituted into the ${input:json} part, which is responsible for issuing the RPC request to the running lambda.

And that's all.

Running

Now it's clean and simple: with the .go file open on VSCode:

  1. Click on Run on your sidebar, or type Command+Shift+d, Ctrl+Shift+d on Windows, then select build and debug and click run. Now your lambda function will be built and run.
    VSCode debugging pane
  2. Then issue an event to your lambda using the run task command from the terminal bar with Command+Shift+p or Ctrl+Shift+p on Windows.
    Select run task
  3. Select event, a file picker will open to show available options from the events folder.
    Choose event
  4. Select the json you want and press enter, the json will be sent to the lambda function on the session and the debugger will trigger.
    Select which event

After these commands, if everything went well, you should see something like:

VSCode with breakpoint reached

This setup does not need Docker or complicated CLIs with many configuration files; here we just extended configuration files already in use, with minimal changes. I hope you enjoy using this setup.

Going beyond

This workflow is great for local testing/debugging, but at first sight it can seem complicated; however, after using it 2 or 3 times you'll notice you're much quicker. The small RPC client awslambdarpc can be imported as a library into your code and used to run your test files; using it to run tests can help you validate input/output of your application.

Note

This post was also published on my Medium account.

Misc

This site is a member of the 250Kb and 512Kb clubs.

Some links we like:

      ,_.     ,.eo-ee,   
     .e.e,   e_/e0^o,    
  .,;-o-e.. ,/ee^        
     ,e0\o  /-o0__ee,._  
   ee 0_/e e\o_/__\-e,o^e
   ,,o0o\ ,|v/ee. ^"e-   
,ee_e._, \ //            
        \ V |   _.e.     
        |   /_-ee,_.     
       /\   \-__         
     _/ /  \ \  \_       
  .,wW'^^^//;^-^;^;w_    
  derelict garden webring