$ sudo -u postgres psql
postgres=# create database nextcloud;
postgres=# create user nextcloud with encrypted password 'pass';
postgres=# grant all privileges on database nextcloud to nextcloud;
Prepare nginx
Create /etc/nginx/sites-available/nextcloud.conf with the following contents
upstream php-handler {
    server unix:/var/run/php/php-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
    "" "";
    default "immutable";
}

server {
    listen 80;
    listen [::]:80;
    server_name drive.YOUR_DOMAIN.com;

    # Path to the root of your installation
    root /var/www/nextcloud;

    # Prevent nginx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # These settings allow you to optimize the HTTP/2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that Nginx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in an HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;
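The file as copied here stops inside the PHP location block. For reference, the standard Nextcloud nginx example continues roughly as follows; this is a sketch, so adjust the fastcgi parameters to your PHP-FPM setup:

```nginx
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;

        fastcgi_param modHeadersAvailable true;      # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;  # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    # Serve static assets; `$asset_immutable` comes from the map block at the top
    location ~ \.(?:css|js|svg|gif|png|jpg|ico|wasm|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463, $asset_immutable";
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
```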
$ sudo apt-get install postgresql postgresql-contrib
$ sudo -u postgres psql
postgres=# create database git;
postgres=# create user git with encrypted password 'pass';
postgres=# grant all privileges on database git to git;
postgres=# create database woodpecker;
postgres=# create user woodpecker with encrypted password 'pass';
postgres=# grant all privileges on database woodpecker to woodpecker;
postgres=# exit
nginx
Save the following as /etc/nginx/sites-available/gitea.conf (e.g. with sudo vim /etc/nginx/sites-available/gitea.conf).
server {
    listen 80;
    server_name git.YOUR_DOMAIN.com;

    # The remainder of this block is reconstructed: the original file
    # proxies requests to Gitea on port 3000.
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Open /etc/systemd/system/gitea.service and uncomment the PostgreSQL lines.
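In the stock gitea.service that ships with Gitea, these are the commented dependency lines in the [Unit] section (exact contents may differ by version):

```ini
# Change
#Wants=postgresql.service
#After=postgresql.service
# to
Wants=postgresql.service
After=postgresql.service
```

This makes systemd start PostgreSQL before Gitea.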
Start the service
$ sudo systemctl start gitea
Open http://IP_ADDRESS:3000 and configure the database settings.
Open /etc/gitea/app.ini and change ROOT_URL to git.YOUR_DOMAIN.com.
$ sudo systemctl restart gitea
Now visit http://git.YOUR_DOMAIN.com/admin/applications.
Add a Woodpecker application with Redirect URI http://woodpecker.YOUR_DOMAIN.com/authorize, and copy the client id and secret somewhere, as we will need them in the next steps.
Create the file /etc/systemd/system/woodpecker.service and paste the following.
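The unit file itself is missing from this copy of the post. A minimal sketch built from Woodpecker's documented server environment variables might look like the following; the binary path, secrets, and database credentials are all placeholders, and the client id and secret come from the Gitea application step above:

```ini
[Unit]
Description=Woodpecker CI server
After=network.target postgresql.service

[Service]
# All values below are placeholders -- substitute your own.
ExecStart=/usr/local/bin/woodpecker-server
Environment=WOODPECKER_OPEN=true
Environment=WOODPECKER_HOST=http://woodpecker.YOUR_DOMAIN.com
Environment=WOODPECKER_GITEA=true
Environment=WOODPECKER_GITEA_URL=http://git.YOUR_DOMAIN.com
Environment=WOODPECKER_GITEA_CLIENT=YOUR_CLIENT_ID
Environment=WOODPECKER_GITEA_SECRET=YOUR_CLIENT_SECRET
Environment=WOODPECKER_AGENT_SECRET=SOME_SHARED_SECRET
Environment=WOODPECKER_DATABASE_DRIVER=postgres
Environment=WOODPECKER_DATABASE_DATASOURCE=postgres://woodpecker:pass@localhost:5432/woodpecker?sslmode=disable
Restart=always

[Install]
WantedBy=multi-user.target
```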
The last environment variable is added because of a bug; setting it seems to be a workaround. Start the woodpecker service.
$ sudo systemctl start woodpecker
Now visit http://woodpecker.YOUR_DOMAIN.com. It should redirect you to http://git.YOUR_DOMAIN.com for authentication. Ideally the setup would be finished at this point, but the server will crash due to this issue. The workaround is to change WOODPECKER_GITEA_URL to http://woodpecker.YOUR_DOMAIN.com:3000; for some reason, the woodpecker server crashes when Gitea is behind the nginx proxy.
If woodpecker crashes then do the following to get the backtrace
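The exact commands are missing from this copy; since woodpecker runs as a systemd unit here, the crash log and any Go panic backtrace can typically be pulled from the journal (unit name assumed from the setup above):

```shell
# Show the most recent log lines for the woodpecker unit,
# including any panic backtrace printed at crash time.
sudo journalctl -u woodpecker -n 100 --no-pager
```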
Now, if the woodpecker server is working, we can proceed. We need to change the Gitea port because woodpecker-agent listens on port 3000, which is already in use by Gitea. So, open /etc/gitea/app.ini and change HTTP_PORT to 4000. Open the nginx configuration file /etc/nginx/sites-enabled/gitea.conf and change 3000 to 4000. Restart both services and check that everything works by opening git.YOUR_DOMAIN.com.
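These edits can also be scripted; a hedged sketch, with the file paths taken from this guide and the sed patterns being my assumption about the files' contents:

```shell
# Move Gitea from port 3000 to 4000 so woodpecker-agent can use 3000.
sudo sed -i 's/^HTTP_PORT *= *3000/HTTP_PORT = 4000/' /etc/gitea/app.ini
sudo sed -i 's/:3000/:4000/g' /etc/nginx/sites-enabled/gitea.conf
sudo systemctl restart gitea nginx
```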
Open /etc/gitea/app.ini, update ROOT_URL=https://git.YOUR_DOMAIN.com/, and restart the gitea service. Open /etc/woodpecker.conf and update WOODPECKER_HOST and WOODPECKER_GITEA_URL with the https URLs, then restart the woodpecker service. Visit http://git.YOUR_DOMAIN.com/admin/applications again and update the Redirect URI with https.
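For clarity, these are the values after the switch to HTTPS, assuming the file locations used earlier in this guide:

```ini
# /etc/gitea/app.ini
ROOT_URL = https://git.YOUR_DOMAIN.com/

# /etc/woodpecker.conf
WOODPECKER_HOST=https://woodpecker.YOUR_DOMAIN.com
WOODPECKER_GITEA_URL=https://git.YOUR_DOMAIN.com
```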
I often need to check for inconsistent capitalization in my TeX files. Listing all runs of consecutive capitalized words, and all all-caps tokens, helps me decide which capitalizations are intentional and which are not. The following bash script has two functions that list all terms (Capitalized Phrases) and acronyms used throughout the input file.
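The script itself is not reproduced here; a minimal sketch of two such bash functions (the names `terms` and `acronyms` match the usage below, but the exact regexes are my assumption) is:

```shell
#!/bin/bash

# List multi-word Capitalized Phrases with their frequency, most common first.
terms() {
    grep -oE '([A-Z][a-z]+ )+[A-Z][a-z]+' "$1" | sort | uniq -c | sort -rn
}

# List runs of two or more uppercase letters (acronyms) with their frequency.
acronyms() {
    grep -oE '\b[A-Z]{2,}\b' "$1" | sort | uniq -c | sort -rn
}
```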
$ terms filename.tex
     19 Cloud Station
      9 Sensor Gateway
      7 Sensor Cloud Infrastructure
...
$ acronyms filename.tex
     34 VM
     13 IaaS
     13 CPU
...
Rumal is a C++ library that generates HTML/CSS/JavaScript code from closely matching C++ syntax. It currently uses std::string, which is intended to be replaced with compile-time strings. Injecting placeholders is also planned but not yet implemented; that will make it usable as a template engine.
I am working on Tash, an open-source C++ library for the ArangoDB database. It includes APIs for HTTP-based document access and a query builder for AQL (Arango Query Language). These are a few example usages.