Categories
Business Freedom and Privacy Nextcloud Technology Tutorial Ubuntu

Installing Nextcloud on Digital Ocean Ubuntu Server Droplet

Disclaimer: As always, my blogs are not meant to be well-written technical literature but rather a better-than-crap version of a messy notepad. I hope it's clear enough to help someone other than myself, but I always check them to make sure at least I can understand it, ha.

The good news is that there is a nice blog written on this, and Nextcloud (NC) is also, quite literally, a 'snap' to install on a server.

At some point while learning this I assumed you had to install either Nginx or Apache yourself but, apparently, the snap comes packaged with Apache and no one told me. So good news. Now you know. The other cool thing I learned is that it comes with its own easy Let's Encrypt installer thing…

I noticed the blog starts by assuming you know how to create a sudo user and do your firewall stuff, but let's not assume that, because there are probably others out there like me. Thankfully I've already written this up for myself elsewhere, so I'll paste those instructions right into this tutorial as the first few "Big Steps".

I will follow this nice blog

Any time you see ‘123.123.123.123’ this is your IP address for your server. Replace it with your actual one, of course.

You should also have a nice password manager set up to save yourself much time and pain as you learn. I recommend KeePassXC.

Let’s get started

Big Step 1 – Set up the Digital Ocean Droplet

  1. Make sure you already have a domain purchased and reserved for your needs. This is optional, but I will assume you're using one for this tutorial.
  2. Initiate your server / droplet
    I won't give detailed instructions here because there is lots of documentation out there. I selected one with 2GB of RAM (at the time of writing it's $10.00 USD/month for this) only because the teacher recommended it. I didn't research hardware specs or what I should choose; your needs may be different. I know Nextcloud Talk uses a fair bit of RAM but I'm not sure how much for a large conference... worth investigating, perhaps? I might also recommend calling your subdomain 'nc' instead of 'files' like I did in my example, since you might do much more than 'files' with NC. 'office.domain', 'nc.domain', or 'cloud.domain' might make more sense?
  3. In your domain registrar, point your domain to your server's name host. In my case it will be Digital Ocean's, which I will pull from the 'Networking' section after adding the domain to Digital Ocean. In my case I already had a test domain set up, so what I did was set up a subdomain called "files.domain.com". The reason you want to do this right away is that DNS changes take time to propagate, and the domain needs to be resolving before Let's Encrypt can issue a certificate later on.
  4. Set up your SSH keys
    Before you install the droplet, make sure your SSH keys are set up properly because you are going to use them a lot in the terminal. Here is an SSH tutorial. Note that it's a little hard to find the SSH stuff in the DO backend, so for the record it's at Settings / Security and then a tab (I think…). If you already have a DO space and SSH keys set up, you are ready and can use those keys again.
  5. SSH into your new server
    ssh root@123.123.123.123

Note 1: Normally at this point when I install a server I do the usual sudo apt update and sudo apt upgrade, so go ahead and do that now on the fresh droplet.

But… you don’t have a sudo user yet so here you go:

Big Step 2 – Make a sudo (non-root super user) user

Set up a non-root user account on the server
This is a nice detailed tutorial on this if you’d like here.

Otherwise, this summarizes it:

  • Log into the server as root: ssh root@123.123.123.123
  • Make the sysadmin user (you can name it whatever you'd like; it doesn't have to be 'sysadmin'): adduser sysadmin, then follow the prompts (a password prompt and a few questions)
  • Add new user to the sudo list: sudo usermod -aG sudo sysadmin
  • Switch to new sysadmin user: su sysadmin
  • (Optional) Test the new-found powers by running this and then cancelling: sudo dpkg-reconfigure tzdata
  • If this last part worked (and conveniently also set up your time stuff!) then you should be good to move forward with your new non-root user.

Helpful note
If you find you are not getting prompted for the questions after running the adduser command, it might be because you accidentally ran the useradd command like me. useradd is the lower-level tool; it doesn't prompt for anything or create a home directory by default, which is why the two commands behave differently.

From here, follow this blog again and make sure you follow his exact firewall rules. The SSH rule is already done, but the rest seem to be an updated way of doing it; when I tried the older way I had nothing but headaches.

I would add these notes to go with the end of his blog where he is talking about checking the non-root user. This is how I had to think about it:

  • To be sure the non-root user is working before logging out as root, open a new terminal window (so both are open)
  • Try to log in with your new non-root user: ssh sysadmin@123.123.123.123
  • If successful, log out of your root user, close the original root terminal window, and continue

Big Step 3 – Checking the Ports

Just before beginning the install of Nextcloud via snap, take a minute to check your ports. I wasted the better part of a day over this.

Before moving on, take a moment to check your ports with a website like this and make sure at least 443 and 80 are open for all the domains you have set up for this box and also the IP address.
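If you'd rather check from a terminal than a website, here's a small sketch using bash's built-in /dev/tcp pseudo-device (this assumes the coreutils timeout command is available; the host below is the placeholder IP, so substitute your own):

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's /dev/tcp pseudo-device; no extra tools needed.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

check_port 123.123.123.123 80
check_port 123.123.123.123 443
```

If either prints "closed", go fix your firewall or DNS before continuing.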

Big Step 4 – Follow the Nextcloud Install blog!

Thankfully the same author as the blog above also wrote a blog on how to install the Nextcloud snap. I tried other blogs but they were all dying, and I think it's because something changed between Ubuntu 18.04 and 20.04… not sure, but this Nextcloud setup blog was the only one that worked for me.

Some of my notes to go with his great blog:

  • I liked the idea of using the command line to quietly create the username / password for the master admin account, rather than exposing that setup page to the internet and hoping no one lands there before you do. It shows that all you have to do is run this:

sudo nextcloud.manual-install <username> <password>
for example
sudo nextcloud.manual-install sammy password

However, when I ran it, it hung and never finished:

sysadmin@files:/root$ sudo nextcloud.manual-install sammy password
>

Well, it turns out there must have been something 'weird' with the auto-generated password I had. It was loaded with special characters, so perhaps one of them was not acceptable to the NC database? Not sure, but after I removed the special characters from the password (not advised!) it worked fine with the syntax above.
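For what it's worth, plain shell quoting could also cause this kind of grief: characters like $, !, and & mean things to bash before Nextcloud ever sees them. A small sketch with a made-up password (the password and the sammy username here are just illustrations):

```shell
# Single quotes pass every character through literally; left unquoted,
# $$ would become the shell's own PID and & would background the command.
PASS='p@$$w0rd!&(odd)'
printf '%s\n' "$PASS"

# Then hand it to the installer inside quotes:
# sudo nextcloud.manual-install sammy "$PASS"
```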

Otherwise, you can go to the non-HTTPS web page and configure the admin account yourself via the GUI.

Note this: Another way to deal with trusted_domains list

I would also add these notes to go with the ‘trusted_domains’ conversation in the blog. This was a bit new to me and caused me a few headaches over the last few weeks so I wanted to leave a record of ‘another way’ to deal with them.

I was greeted with this issue, which I needed to solve.

The blog above has the very simple-looking command to add trusted domains to a list that will allow access to the nextcloud instance. The command is as follows:

sudo nextcloud.occ config:system:set trusted_domains 1 --value=example.com

What was not taught here is what's actually going on, though. The 1 is the index of an array that gets logged in a PHP file somewhere in Whoknowswhereville, USA, so for each entry you add (i.e. www, IP address, main domain, etc.), be sure to increment it by one.

The default 0 will be 'localhost', so start with 1 and work up from there. Note also that you must have all the spaces verbatim. If you don't have a space after trusted_domains and after the 1, for example, you'll have to sudo nano your mistake out, ha. So choose your poison. I find the sudo nano method below just as easy now that I know how to do it, but I think this one-line command is great too if you are slow and steady.
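If you have several domains to add, a little loop keeps the index numbering straight. This is a dry-run sketch (the domains are placeholders, and the echo just prints each command so you can eyeball them first; drop the echo to run them for real):

```shell
# Index 0 is localhost, so extra trusted domains start at 1 and count up.
i=1
for domain in example.com www.example.com 123.123.123.123; do
  echo sudo nextcloud.occ config:system:set trusted_domains $i --value="$domain"
  i=$((i + 1))
done
```

You can confirm the result afterwards with sudo nextcloud.occ config:system:get trusted_domains, which prints the whole list.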

Without further ado, if you goof up a trusted_domain entry, you can manually edit the whole thing in the PHP array.

It took me a while, but I finally found it here:

cd /var/snap/nextcloud/current/nextcloud/config

The file is called ‘config.php’ and to edit it you can just do this and scroll down to the ‘trusted_domains’ array and type them in following the local host example:

sudo nano /var/snap/nextcloud/current/nextcloud/config/config.php
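For orientation, the trusted_domains block inside config.php looks roughly like this (the domains below are placeholders following the localhost example):

```php
'trusted_domains' =>
array (
  0 => 'localhost',
  1 => 'example.com',
  2 => 'www.example.com',
  3 => '123.123.123.123',
),
```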

After adding the trusted_domains, I was able to access my NC instance.

Big Step 5 – Secure up the traffic with LetsEncrypt!

Make sure your firewall stuff is done – hint.

Here's a key lesson: this bad boy doesn't work with the 'regular Let's Encrypt' installation method, so if you are in the habit of installing certs that way, slow down and stop. This is the command for installing it with the NC snap:

sudo nextcloud.enable-https lets-encrypt

This was thanks to the blog

But before you actually run that command above, let's go back to our friend's blog above and open up the necessary ports:

sudo ufw allow 80,443/tcp

Now go ahead and run the Let's Encrypt command above and follow along in the blog to make sure you're doing it right.

Did you get a failed letsencrypt attempt like I did one time? With a message like this?

To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

If so, you probably have a closed port. See my warnings above.

Big Step 6 – Enjoy

Probably there are some great blogs out there for actually using NC but at least now you can go to your domain and get started.

Have a nice day!

Categories
Nextcloud Technology Tutorial

Fixing My Dead Nextcloudpi box in three simple steps

My Nextcloudpi box was kicking back errors and wouldn't start; it was in a degraded state. I then learned about the systemctl --failed command, which showed something wrong with certbot. Thankfully I already knew that was related to Let's Encrypt.

I realized it was trying to renew two domains that were no longer in use and had expired; they would have failed to renew anyway. I'm not sure why they were there in the first place, other than perhaps some testing I did a long time ago. I wanted to remove them completely to rule them out, so here are the instructions for anyone else trying the same thing on Nextcloudpi or another server with 'random Let's Encrypt domains'.

Part 1 – Removing Random LetsEncrypt Domains and Certificates

Should be as simple as running these commands and replacing yourdomain.com with your domain verbatim.

  1. sudo rm -rf /etc/letsencrypt/archive/yourdomain.com
  2. sudo rm -rf /etc/letsencrypt/live/yourdomain.com
  3. sudo rm -rf /etc/letsencrypt/renewal/yourdomain.com.conf
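Before rm -rf'ing, it may be worth letting certbot tell you exactly what it has, so you get the certificate names right (your names will differ from the placeholder below):

```shell
# List every certificate certbot is tracking, with names and expiry dates:
sudo certbot certificates

# Certbot's own removal command cleans up archive/, live/ and renewal/ in one go:
sudo certbot delete --cert-name yourdomain.com
```

The delete subcommand is arguably the tidier route, but the manual rm -rf steps above accomplish the same thing.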

Part 2 – Disabling the Deleted Site from Apache’s Reach

After doing Part 1 above and removing these random domains that should not have been there, I ran sudo systemctl restart certbot.service to restart the certbot service. This seemed to work. However, when I ran sudo systemctl status it still did not show a green light.

After digging deeper by running a sudo journalctl -xe command, the log showed there was a syntax error in /etc/apache2/sites-enabled/000-default.conf. After going to line 43, I realized it was trying to load one of the domains I had deleted in Part 1 above. Thankfully I already knew how to enable and disable Apache sites.

To do that I did:

sudo a2dissite 000-default.conf

Then I did a:

sudo systemctl restart apache2.service

Then I did another check on services:

sudo systemctl status

Nice, the apache stuff now seems fixed but…

Part 3 – Fixing some Wifi-Country.service Error Thing

I still have another error which the system seems not willing to go around! Why? Why me? haha

wifi-country.service loaded failed failed Disable WiFi if country not set

So then I found this blog where I pasted the following to the bottom of the file (note the closing brace on the network block; the ssid and psk values are just example placeholders):

country=CA
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="testing"
psk="testingPassword"
}
Full disclaimer: I have absolutely no idea what this stuff above does, so yeah – copy and paste at your own risk like I did!

There weren't any instructions to restart a service, which I expected there to be. And, as expected, it was still dead. Thankfully my guess was correct and there should indeed have been a service restart command, as follows:

sudo systemctl restart wifi-country.service

That seemed to work…

Then I ran sudo systemctl status and everything looked running. Yay!

Then BOOM! System back. Nextcloudpi started syncing again.

I have no idea why things went bad, but it's a good day when you can fix it! It usually doesn't go this well for me, but I guess I'm learning a few things, so let's keep suffering for growth, shall we?

Have a good day!

Categories
Technology Tutorial Ubuntu

Advanced PDF Printing on Ubuntu

I travelled down a road that failed, but I learned some cool things about printing to PDF on Ubuntu that I think are worth logging and sharing for others.

1. Out-of-the-Box PDF Printing (in case you didn’t know)

In case you are new to Ubuntu and didn't know it, basic PDF printing is already installed and working perfectly. All you do is choose 'Print to File' when you select your printer and PDFs are created. This might be all you need to know from this tutorial.

2. Converting with ImageMagick's 'convert' Command

This convert command is powerful. You can convert pretty much any image file to PDF, and probably other file types, too. If you don't believe me about how powerful it is, just type man convert and check out the options…

A problem, however, is that it had a significant security issue in the past (in the underlying Ghostscript delegate it uses for PDFs), so the rights to use the command for PDF work are disabled by default by a policy file. If you are running a server, you should probably do your own research on the topic, but if you are a regular desktop user like me, you can probably follow the advice I found online and just rename the security policy file out of the way. Here is what I did, and it removed all the permission errors I got while trying to run the command:

Move to the directory where the policy is: cd /etc/ImageMagick-6
Move the 'policy.xml' file while giving it a new name (which ultimately just renames it): sudo mv policy.xml policy_disabled.xml

Now you can run the ‘convert’ command and send whatever it is to a pdf:

convert original_image_file.jpg output_pdf_file.pdf
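It's also handy that convert accepts several inputs at once and emits a multi-page PDF. Here's a self-contained sketch (the xc: trick generates blank test images so you don't need real files on hand; it assumes the policy fix above has been applied):

```shell
# Generate two blank 100x100 test images, then staple them into one PDF,
# one page per input image.
convert -size 100x100 xc:white page1.png
convert -size 100x100 xc:gray page2.png
convert page1.png page2.png combined.pdf || echo "PDF conversion still blocked by the ImageMagick policy"
```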

3. Installing a ‘Virtual Printer’ and ‘Printing’ to it graphically

Install the thing: sudo apt install printer-driver-cups-pdf

As soon as it’s done you should be notified that a new ‘printer’ has been installed, just like it was a normal hardware printer. Now you can simply ‘print’ like you normally would to the printer called “PDF”. It will output the files to your home directory in the “PDF” folder.

4. Advanced printing with the command line using the Virtual Printer

Now that you have installed the printer-driver-cups-pdf virtual printer above, you have a 'fake printer' on your computer and can do 'lpr printing' from the command line. You can learn about what you can do with lpr here, but what we're going to do is simply direct the printing to the printer called "PDF".

So whatever you can do with a regular printer, it seems you can do it now to PDF.
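A couple of hedged examples of what that looks like (the file name is made up; -P selects the destination printer and -# sets the number of copies, both standard lpr options):

```shell
# 'Print' a text file to the virtual printer; the PDF lands in ~/PDF
lpr -P PDF notes.txt

# Two copies, same destination:
lpr -P PDF -# 2 notes.txt
```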

5. Other

There is also this thing which may be worth researching: ps2pdfwr. I didn't have time, but it seems like there may be other value there. It comes installed with Ubuntu, takes PostScript input, and makes PDFs. You can type man ps2pdfwr in your terminal to learn more.
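Here's a minimal self-contained sketch, assuming the ghostscript package (which provides ps2pdfwr) is installed: hand-write a one-line PostScript file, then convert it:

```shell
# A tiny valid PostScript file: pick a font, move the cursor, draw a word.
printf '%%!PS\n/Helvetica findfont 24 scalefont setfont\n72 720 moveto (hello) show\nshowpage\n' > hello.ps

# PostScript in, PDF out:
ps2pdfwr hello.ps hello.pdf
```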

Hope this helps.

Categories
Technology Tutorial Ubuntu Ubuntu Touch

RECOVERING FILES USING SCALPEL AND UBUNTU

“Ooops. I deleted my photos”

I used Foremost (in this blog) to try to recover deleted mp4 video files for my sister using Ubuntu, but it wouldn't find the video files. It found a bunch of photos, but not her supposed videos.

Then I thought maybe the tool wasn't capable, so I tried to figure out Scalpel, which is 'the other' major carving tool besides Foremost.

But how to use it?

This video is an absolutely great way to get started with the basics. Definitely take the time to watch it, as it will give you a good bird's-eye view, not just of carving an image file but also of carving a device (i.e. hard drive, USB drive) connected to the machine.

Also, the man scalpel page was pretty useful; however, I still couldn't figure out

a) where the ever-important scalpel.conf file was and
b) how to do mp4 which was (oddly) missing from scalpel.conf

It was pretty surprising that mp4 was missing, that’s for sure…

However, I did find this blog, which got me most of the way there in terms of basic learning. Interestingly, the author did not make the following code (containing the mp4 header/footer stuff) available to simply copy and paste. However, I was able to type it out manually from the screenshot so that I (and the world) can just copy and paste it. Here it is:

EDIT 200816 – See below the disclaimer before doing this. I was wrong..

mp4 y 30000000:70000000 \x46\x4c\x56\x01\x05\x00\x00\x00\x09\x00\x00\x00\x00\x12\x00\x02\x36

Important Disclaimer: I have no idea if this entry is right or whether it works, but I can verify that the program itself ran perfectly and carved for files. No videos were ultimately carved in my situation, but I don't think that is the fault of the software or of my copy/paste above; I don't think there actually were any videos on my drive to start with. But I wanted to be clear that I simply typed out another blogger's entry and stuck it here.

I was wrong, it seems. I kept getting nothing at all, which I thought was weird. Then I decided to dump the hex above into a hex-to-text converter and discovered it translates to "FLV" (a Flash video signature, not mp4). So then I used the same hex converter, wrote 'mp4' in there, and got this output:

6d 70 34

Then I updated the scalpel.conf file to look like this:

mp4 y 30000000:70000000 6d 70 34

This started working, so I appear to be on the right path. However, the files are still not playable and came in about 100 small chunks per directory. I still need the correct scalpel.conf entry to carve a whole mp4 file… any help appreciated in the comments below!
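For the record, scalpel.conf entries are whitespace-separated fields (extension, case sensitivity, max carve size, header, optional footer), and byte patterns are normally written in \x hex notation rather than space-separated pairs. I haven't verified a working mp4 signature, but most mp4 files carry the 'ftyp' marker (\x66\x74\x79\x70) at byte offset 4, so a sketch entry might look like:

```
# extension   case   max-size   header
mp4           y      70000000   \x66\x74\x79\x70
```

Note that since 'ftyp' sits four bytes into the file, anything carved starting at that marker would be missing its first four bytes, which may be one reason carved chunks come out unplayable.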

Then I finally found another blog which showed that the scalpel.conf file is located at /etc/scalpel. Now you don't have to spend an hour learning that!

Once I had what I needed to paste, and knew where to paste it, the first thing I did was edit the scalpel.conf file with the following command, copy/paste the above 'stuff' at the very bottom of the file, and then hit Ctrl+X, etc., to save the changes:

sudo nano /etc/scalpel/scalpel.conf

Once that is done, we need to run the command. Even this part messed me up in a few of the tutorials and man pages I read. Here is the syntax you'll need to start it running, with my breakdown of each component following:

sudo scalpel -c /etc/scalpel/scalpel.conf -o /path/to/output/directory/directoryname /path/to/location/of/image.img

Quick explanation of this so you can plug in your details into the command to get carving:

sudo – it requires super user permissions (and your password)
-c – this flag points to the scalpel config file you edited above. You can move it if you want, but if you didn't, the path in my example above is correct
-o – this points to the output path where you want the recovered files to go
directoryname – at the end of your path, add a useful name you'll remember; this directory will be created and the recovered files placed inside it if it works
/path/to/location/of/image.img – you have likely already cloned the drive and are not using the original (recommended for many reasons), so this is the path to where that image file is located on your machine
image.img – represents the name of your image, whatever it is.

So once you customize your details and hit 'go', you should be able to start carving files. Also, if you want to carve for more than just mp4s, you can un-comment the options in the scalpel.conf file, or add others, and they will all run at the same time once you start.

I found that Foremost recovered more .jpg files than Scalpel, but I might have configured one or both of them incorrectly…

Hope this helps you recover some lost goods!

Categories
Life Skills Technology Tutorial Ubuntu Ubuntu Touch

RECOVERING DATA FROM YOUR ANDROID DEVICE USING FOREMOST AND UBUNTU

“Ooops. I deleted my photos”

Well, I thought this would be a bit easier than it was, but such is life: HARD. May this tutorial save many or all of you lots of time. You can probably do all this with Kali, Debian, or other distributions that have the same commands, but I'm on Ubuntu, so Ubuntu it will be.

My goal here was pretty simple. My sister (or someone she knows…) wiped the photos from her phone and they are gone. One would think they would be saved somewhere in Google's creepy backup network, but it seems not. Perhaps when you press 'delete' on your Android device they delete first from your device and then from the Google servers as they sync? That would actually be a good thing for security and privacy reasons, which is why I doubt it works like that (ha). Anyway, I use Ubuntu Touch so I don't know. But I do know that Nextcloud is awesome, and I believe you can set it up for manual backups so syncing doesn't happen automatically; you can probably set it up just fine on Android, too. That would be the ideal setup: if you accidentally delete stuff like this, the deletion won't make it to the server and you could restore from there. Anyway, my sister wanted these photos and I like learning, so I started reviewing how to do it from a tutorial I never finished back in 2013. At that time I had wiped all my emails in Thunderbird, and I recall I did successfully get them all back.

The first interesting thing I encountered is that when I plugged the Android device into my Ubuntu computer, unlike my Ubuntu Touch mobile device, it 'mounts' but doesn't show up as the usual sdb, sdc, sdd, etc. drive in the file manager. The reason for this is that Android apparently uses 'MTP', and this was the source of my initial pain. If you are interested, here is a link about my MTP journey, only because I don't want to waste the time I spent logging it all, and perhaps one day it will be useful related info.

Cloning the Drive with ddrescue!

To clone the SD card of the device we are going to use a tool called ddrescue, which is annoyingly called 'gddrescue' in the Ubuntu packages, yet when you run it, the command is 'ddrescue', not 'gddrescue'. I wasted a solid 15 minutes trying to figure that one out…

  1. Install ddrescue on Ubuntu: sudo apt install gddrescue
  2. Learn how it works and make sure you have the right tool: man ddrescue (again, not 'man gddrescue')(man!)

This blog post was good, except that lshw -C disk only shows disks and directly attached USB drives, I guess, but not SD cards. Since my SD card is in the adaptor directly in my laptop, I'm going to use lsblk, which will show the SD cards and their 'mount points'. If you haven't seen these before, they typically show up as entries starting with 'mmc'.

  1. Identify your SD card's device name: lsblk
    Mine is 'mmcblk0' but yours could be different. If there is a 'tree' of partitions, just choose the top-level one for the card.

Documentation shows the basic command syntax for ddrescue as: ddrescue [options] infile outfile [logfile]

They also had this example:

root# ddrescue -f -n /dev/[baddrive] /root/[imagefilename].img /root/recovery.log

I will change the above example to the following, but you’ll adjust yours accordingly:
sudo ddrescue /dev/mmcblk0 ~/samsungs7.img

What I’m hoping this will do is burn a copy of the sd card to an image file which I will then be able to carve. If you do your research you’ll find that it is always recommended to clone a drive and do forensics work on it rather than do the work directly on the ‘active’ drive. I’ve removed options because I want the most thorough image creation. Disclaimer: I have not studied ddrescue in detail, so always do your own deeper dive if you have time.
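One small note on the options: the documentation's example includes a third argument, the log (or 'mapfile'), and it's cheap insurance. With it, an interrupted clone can resume where it stopped instead of starting over. My command with the mapfile added would be:

```shell
# Same clone, plus a mapfile; re-running this exact command after an
# interruption resumes from where it left off.
sudo ddrescue /dev/mmcblk0 ~/samsungs7.img ~/samsungs7.map
```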

Note 1 before you run the command to clone: I quickly found out at this point that my laptop didn't have enough free space, so I decided to clone the drive straight to another USB pendrive of the same size to keep my laptop from running out of space. At the same time, I figured I would follow the common advice and use a drive of the same size as the cloned image. Wrong move. Even though advice online says you need 'a drive of the same size or larger', there must be tiny size differences between manufacturers, because just as ddrescue was finishing the cloning job, it stopped and told me the USB drive had run out of space. I ended up pointing it to my big storage drive. So, my advice is this: output your cloned image to a drive LARGER than the drive being cloned. This could save you time, headaches, and confusion.

Note 2 before you begin to clone: be careful that the path for your output image file is a 'real storage path' on your machine, and do not accidentally point at a device such as '/dev/sdc', or you will get a 'this is not a directory' warning. If you get this warning, simply check the path and adjust accordingly.

The following is my more layman's example of the command I used successfully, showing the SD card input device with the Android data on it ("mmcblk0"), the output image file going to my big storage drive ("HardDriveName") into a directory on that drive that I had already created (for example, a directory called 'samsung'), and finally the name of the clone created in that directory ("cloneImageName.img"):
sudo ddrescue /dev/mmcblk0 /media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img

Update the above according to your paths, of course, including the mmcblk0 as yours might be different – lsblk to confirm…

When the ddrescue command is running it looked like this for me:

GNU ddrescue 1.22
ipos: 67698 kB, non-trimmed: 0 B, current rate: 43384 kB/s
opos: 67698 kB, non-scraped: 0 B, average rate: 22566 kB/s
non-tried: 15957 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 67698 kB, bad areas: 0, run time: 2s
pct rescued: 0.42%, read errors: 0, remaining time: 11m
time since last successful read: n/a
Copying non-tried blocks… Pass 1

Carving the files with Foremost!

Good job. We now have the clone .img file, and now comes the hopeful recovery part. We will start ripping through the clone to see what we can find.

  1. Get software called foremost: sudo apt install foremost
  2. Learn about it: man foremost

In this case I'll be trying to carve out picture files. It's a Samsung and I happen to know all the photos are .jpg, so I'm going to narrow the carving down to .jpg only to speed it up.

foremost -i /media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img -o /media/user/HardDriveName/path/to/place/to/save/restored/pictures -t jpg -v

You can see now that my input file / target file is the same one we created in the cloning section above:
/media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img

The place where we’ll save the carved files is:
/media/user/HardDriveName/path/to/place/to/save/restored/pictures

I stuck -v on the end to make it more obvious (verbose) in the terminal what’s happening. I recommend the same.

When everything is complete, by the way, Foremost creates and saves a file in the carved-files directory called 'audit.txt'. Then a specific directory called 'jpg' is created, which houses the files. This is nice, but I didn't know it before I began, so I wasted some time creating directories with custom file-type names. No need; Foremost does this for you.

The process may take a long while so pack a lunch…

Conclusion

These two tools, ddrescue (for cloning) and Foremost (for carving), are great and not so hard to use. Foremost successfully found some deleted photos; however, it was unable to find deleted videos. In fact, Foremost didn't find a single video on the SD card, which makes me wonder if it worked or if I did something wrong. I specified the .mp4 extension but got nothing. I will continue to look into this, but for now, if you need to recover photos, I can tell you this worked great for me.

In fact, I ended up studying and using a second piece of carving software called Scalpel, which turned out also to be great, though it seems a little less simple to use. I created a similar instructional blog about Scalpel file carving on Ubuntu if you'd like to try that one too.

Categories
Freedom and Privacy Tutorial Ubuntu Touch

How to set up Gitlab with Ubuntu Touch 2FA Manager App

I had been using Ubuntu Touch Authenticator app, but I wanted to try out the 2FA Manager app to see how that worked for me. The only problem was that I had never reset my 2FA stuff that I originally set up so this was my first time switching devices. It was easy, but it was also not documented. I hope this simple guide helps and perhaps the steps of this process also apply to other 2FA sites:

In your Gitlab account

  1. Log into your gitlab account
  2. Go to “Settings” under your top right avatar
  3. Go to “Account”
  4. Click the 'Manage Two Factor Authentication' button
  5. Click the 'Disable two-factor authentication' button, which will then give you this warning: "Are you sure? This will invalidate your registered applications and U2F devices."
  6. Select ‘Ok’ on warning screen

This will now present you with a screen and QR code where you will scan it and bring it into the Ubuntu Touch 2FA Manager app. So now let’s move to your ubuntu touch device…

On your Ubuntu Touch device

  1. Download / install the Tagger app if you have not already done so
  2. Download / install the 2FA Manager app if you have not already done so
  3. Open the 2FA Manager App
  4. Open the Tagger app and point the camera at the QR code, then patiently wait, adjusting the camera position if needed, until it registers the QR code. You will know it has worked when the dialogue with the "open url, copy to clip, generate qr code" buttons pops up
  5. Select ‘open URL’
  6. Add a description. This is just a friendly name that will help you remember what this key is for as you may build up many keys over the years.
  7. Press the 'Add' button. You should now be taken to a screen in the 2FA Manager app with a bold 6-digit code and a moving purple progress meter. Each time the progress meter reaches the end, it loops back with a new 6-digit number. If you tap the 6-digit code, it opens up much bigger and easier to read, and also gives you the ability to copy the code to the clipboard. Whichever way you prefer, those are the two ways to obtain your 2FA codes; these codes are what you will use to log into Gitlab each time, and also what you'll use to finish the setup in your Gitlab account.
  8. Go back to Gitlab

Meanwhile, back at Gitlab…

  1. Look at your 2FA Manager app and grab the 6-digit code presented on the screen. But hurry! (dramatic music here) If you don't grab it quickly enough, the next one will be generated. This is for security, if you were wondering, and limits the chance of someone stealing your stuff…
  2. Enter the 6 digit code into the ‘pin code’ field on your Gitlab 2FA setup page you should still be on
  3. Select ‘Register with two-factor app’ button
  4. Save your recovery codes: If all goes well you should now be presented with a 'congratulations' screen and a bulleted list of 10 random-looking strings of letters and numbers. These are your recovery keys, so save them somewhere safe. I recommend KeePassXC, as it's a really great password manager for your Ubuntu laptop / desktop environment.
  5. After you have these recovery codes safely stored, press ‘proceed’. If you got an ‘invalid code’ message, it probably means you weren’t fast enough with steps 1-3 above so repeat but faster 😉

You should now be brought back to your GitLab Account page and good to go. You can probably use the workflow above to do 2FA setups in other systems as well.

Categories
Business PrestaShop Technology Tutorial Ubuntu

Setting Up Prestashop on Digital Ocean Ubuntu Server

The background to this post is that I had a sudden need to ‘figure out shopping carts’. I understood from times past that it probably isn’t wise to run a shopping cart that intends to scale on a shared cPanel host, even though it’s pretty dead easy to set that up. I decided this was the day I had to learn how to set up an Ubuntu server myself and stick with it until I knew enough to do so.

I started with Digital Ocean because I recall testing a ‘droplet’ and being surprised how easy and nice it was. They also support UBports and their servers so I appreciate that. I got a promotional code to try it out and glad I did. I think it’s a really nice solution.

The real purpose of this tutorial is to make sure I don’t forget all the things I learned and also to leave some bread crumbs for others who might want to do the same thing.

I landed on Prestashop because it’s very open software, very flexible, has been around for many years, and has a bunch of modules that should be quite helpful for rolling out store after store.

The idea of this server is that it will run the ‘main business website’ as well as the shopping cart.

Any time you see ‘123.123.123.123’ this is your IP address for your server. Replace it with your actual one, of course.

You should also have a nice password manager set up to save yourself much time and pain as you learn. I recommend KeePassXC

Let’s get started

Big Step 1 – Set up the Digital Ocean Droplet

  1. Initiate your server / droplet
    I won’t give detailed instructions here because there is lots of documentation out there. One thing I will say is that I used an Ubuntu 18.04 LTS server with 2GB of RAM to make sure I had enough for both the main site and Prestashop.
  2. Immediately set up your A records to point to your domain.
    In Digital Ocean (“DO”) it’s pretty easy but I don’t have any links for it. Just go to ‘networking’, add your domain, then create your domains, subdomains, etc, and point the A record at the server you just initialized in step 1 above. This step is important to do right away because propagation of the name server to the IP address / box can take an hour or more depending on where you are.
  3. Set up your SSH keys
    Before you install the droplet, make sure your SSH keys are set up properly because you are going to use them a lot in the terminal. Here is some SSH tutorial. Note that it’s a little hard for me to find SSH stuff in the DO backend, so for the record it’s at settings / security and then a tab (I think…)
  4. SSH into your new server
    ssh root@123.123.123.123
  5. Set up a non-root user account on the server
    There is a nice detailed tutorial on this here if you’d like.

Otherwise, this summarizes it:

  • Log into the server as root: ssh root@123.123.123.123
  • Make sysadmin user: adduser sysadmin and follow the prompts (should be password prompt and then a few questions)
  • Add new user to the sudo list: sudo usermod -aG sudo sysadmin
  • Switch to new sysadmin user: su sysadmin
  • (Optional) Test the new-found powers by running this and then cancelling: sudo dpkg-reconfigure tzdata

Helpful note
If you find you are not getting prompted for the questions after running the adduser command, it might be because you accidentally ran the useradd command like me… useradd is the lower-level tool and doesn’t prompt; adduser is the friendly one.

  1. Update the server’s packages
    sudo apt update
    sudo apt upgrade

Big Step 2 – Install Apache

  1. sudo apt install apache2
  2. Check to see if this triggered new packages to upgrade: sudo apt update
  3. If packages need updating: sudo apt upgrade

Big Step 3 – Install PHP

There were about a million different tutorials out there that worked but were pretty hard. At the end of all that I found this which seems to work just fine and dandy, so hopefully it works for you:

  1. Add PPA: sudo add-apt-repository ppa:ondrej/php
  2. Update packages: sudo apt update
  3. Install PHP (adjust the version number over time, making sure the version is correct for Prestashop / other software): sudo apt install php7.3
  4. Test the PHP version: php -v
  5. Add the modules required for Prestashop (adjust for whatever else you are installing): sudo apt install libapache2-mod-php7.3 php7.3 php7.3-common php7.3-curl php7.3-gd php7.3-imagick php7.3-mbstring php7.3-mysql php7.3-json php7.3-xsl php7.3-intl php7.3-zip
  6. Open this file (adjust PHP version as needed / as appropriate): sudo nano /etc/php/7.3/apache2/php.ini
  7. Add these lines. Most of these directives already exist somewhere in php.ini, so you can either search for each one (Ctrl+W in nano) and edit it in place, or just add the block near the bottom like I did; when a directive appears twice, the later value should win. Adjust as needed:
file_uploads = On
allow_url_fopen = On
short_open_tag = On
memory_limit = 256M
upload_max_filesize = 100M
max_execution_time = 360
date.timezone = America/Chicago
  1. Restart apache2 after making PHP changes: sudo systemctl restart apache2.service (I’m not 100% sure this step is required, but it never hurts!)
  2. Test that PHP is working by:
  • Creating this test file: sudo nano /var/www/html/phpinfo.php
  • Adding this info to the file and saving: <?php phpinfo( ); ?>
  • Going to the URL where this file is where you should see a boring PHP info page: 123.123.123.123/phpinfo.php
  • Deleting the file if the test is good: sudo rm /var/www/html/phpinfo.php
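Another quick sanity check is to grep php.ini for the directives you just set. This is just a sketch: to keep it copy-paste-safe it demos the check against a throwaway stand-in file in /tmp, with the real path (for PHP 7.3 under Apache) shown in a comment:

```shell
# Demo the check against a stand-in file so the expected output is visible
INI=/tmp/php.ini.demo
printf 'memory_limit = 256M\nupload_max_filesize = 100M\n' > "$INI"

# Print any lines that set the two directives we care about
grep -E '^(memory_limit|upload_max_filesize)' "$INI"

# On the real server, run the same grep against the live file:
#   grep -E '^(memory_limit|upload_max_filesize)' /etc/php/7.3/apache2/php.ini
```

If the grep comes back empty on the real file, your edits didn’t save (or you edited the wrong php.ini, e.g. the cli one instead of the apache2 one).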

Big Step 4 – Install Mariadb

Apparently mariadb is like mysql but getting more love as we move into the future. I’m just following along.

  1. Create a database name and save it to your password manager. I’m going to use ‘presta_db’ for this. You can call it what you’d like.
  2. Create a database user name and a password and do the same in the password manager.
  3. Install mariadb: sudo apt-get install mariadb-server mariadb-client
  4. Do a security hardening pass and set a root password for the database: sudo mysql_secure_installation. Follow the prompts, very likely with these answers: Enter, Y, password, repeat password, Y, Y, Y, Y

Big Step 5 – Create your Database in Mariadb

Assuming you did the steps above in step 4, you should now be able to log into your database area with the new password you created. You don’t HAVE to use CAPS for the SQL keywords (they’re case-insensitive), but I do as a visual aid, which is probably why we see this convention around town…

  1. Log into mariadb: sudo mysql -u root -p
  2. Create your database with the database name you saved to your password manager above: CREATE DATABASE presta_db;
  3. Create the user and password for the database replacing username and password in the following code with yours between the apostrophes: CREATE USER 'username_here'@'localhost' IDENTIFIED BY 'new_password_here';
  4. Grant the new user full privileges on the database (that’s what this does, lol): GRANT ALL ON presta_db.* TO 'username_here'@'localhost' IDENTIFIED BY 'same_password_here' WITH GRANT OPTION;
  5. Flush stuff: FLUSH PRIVILEGES;
  6. Exit stuff: EXIT;
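If you’d rather not type the statements interactively, steps 2–6 can be collected into a file and fed to mariadb in one go. A sketch, using the same placeholder database name, username, and password from above (swap in your own):

```shell
# Write the statements from steps 2-6 into a file...
cat > /tmp/presta_setup.sql <<'SQL'
CREATE DATABASE presta_db;
CREATE USER 'username_here'@'localhost' IDENTIFIED BY 'new_password_here';
GRANT ALL ON presta_db.* TO 'username_here'@'localhost' IDENTIFIED BY 'new_password_here' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL

# ...then run them all at once (prompts for the root password from Big Step 4):
#   sudo mysql -u root -p < /tmp/presta_setup.sql
```

Same result either way; the file approach is just easier to repeat if you set up more than one store.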

Big Step 6 – Create the Directory Structure for a Shared Server Environment (Optional)

First, a good resource for hosting multiple domains on a single Ubuntu server.

I will provide the directory structure I used for both the ‘standalone’ setup and the ‘shared’ setup (multiple domains on one machine). You can choose and adjust how you like.

  1. Make the full path to the place where you plan to put Prestashop on a shared server environment:
  • sudo mkdir -p /var/www/shop.domain.com/public_html
  • repeat this command for each domain, changing the shop.domain.com/ part to whatever is appropriate

On a standalone environment: sudo mkdir -p /var/www/html/domain.com

Big Step 7 – Set up your Virtual Host Files in Apache for Each Domain (Shared server environment)

This blog resource really says it all, except you don’t have to mess around with the /etc/hosts stuff he talks about at the end. I did that and messed things up in the DO environment… The key with this section is to make sure (in whatever way you are most comfortable) that you have a ‘domain.com.conf’ file sitting in the /etc/apache2/sites-available/ directory with the correct domain information saved in it. If you don’t, you’ll have pain. The nice part is this wasn’t as scary as I first thought. The key items you’ll need are probably these for a basic setup, and this worked fine for Prestashop for me:

ServerName domain.com
ServerAlias www.domain.com
ServerAdmin yourname@youremail.com
DocumentRoot /var/www/domain.com/public_html/prestashop

Let’s Encrypt (an awesome free SSL cert system) will do ‘the rest’ for you in an upcoming step to deal with port 443 / https://, if you were wondering…

Here is a summary of what I do showing two domains on a shared environment getting their own conf files which we can then edit:

Copy default template for domain 1: sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/shop.domain1.com.conf

Copy default template for domain 2: sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/domain2.com.conf

sudo nano edit each of the above configs with:

  1. Uncomment ServerName line and add correct one:
  2. Add a ServerAlias for the www entry below it (it’s not shown in the default file for some reason, so type it in: ServerAlias www.domain.com )
  3. Update admin email
  4. Update document root to /var/www/shop.domain.com/public_html
  5. Repeat for conf files for any other domains on server
  6. Reload Apache just because: sudo systemctl reload apache2.service
  7. Disable the default conf file: sudo a2dissite 000-default.conf
  8. Enable domain 1: sudo a2ensite shop.domain1.com.conf
  9. Enable domain 2 (if applicable): sudo a2ensite domain2.com.conf
  10. Enable re-write mode (which drops some code into your conf file above): sudo a2enmod rewrite
  11. Restart Apache again for fun: sudo systemctl restart apache2.service
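For reference, here’s a sketch of what one finished conf file might look like, written out with a heredoc instead of nano (the domain, email, and paths are placeholders matching the snippet above; it writes to /tmp so you can inspect it before copying it into /etc/apache2/sites-available/):

```shell
# Generate a minimal vhost conf for one domain (placeholders throughout)
cat > /tmp/domain.com.conf <<'EOF'
<VirtualHost *:80>
    ServerName domain.com
    ServerAlias www.domain.com
    ServerAdmin yourname@youremail.com
    DocumentRoot /var/www/domain.com/public_html/prestashop
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
EOF

# Inspect it, then move it into place and enable it:
#   sudo cp /tmp/domain.com.conf /etc/apache2/sites-available/
#   sudo a2ensite domain.com.conf && sudo systemctl reload apache2.service
```

The single-quoted 'EOF' matters: it keeps ${APACHE_LOG_DIR} as a literal string for Apache rather than letting your shell expand it.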

Big Step 8 – Putting Prestashop on the server

  1. Install the unzip tool for dealing with the Prestashop zip file: sudo apt install unzip
  2. Navigate to a safe place for doing stuff: cd /tmp/
  3. Verify that the link in the following command points at the latest version. You can verify by going to their releases page, clicking the latest release, right-clicking the zip file, selecting ‘copy link location’, and pasting it into the URL in the command here:
  4. Download the goods! wget -c https://github.com/PrestaShop/PrestaShop/releases/download/1.7.6.7/prestashop_1.7.6.7.zip
  5. Create a prestashop empty directory and unzip the file you just downloaded in step 4 into it: unzip prestashop_1.7.6.7.zip -d prestashop/
  6. Move up a level to be able to perform the next ‘move’: cd ..
  7. Move the whole directory you just made to the appropriate directory you created in ‘Big Step 6’ above. Adjust as required to your situation: sudo mv prestashop /var/www/shop.domain.com/public_html/
  8. Assign the correct ownership to the prestashop directory (adjust path as fitting for your situation): sudo chown -R www-data:www-data /var/www/shop.domain.com/public_html/prestashop/
  9. Assign the correct permissions to the same directory: sudo chmod -R 755 /var/www/shop.domain.com/public_html/prestashop/
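A small quality-of-life sketch for steps 3–5 above: keep the version in a variable so the download URL and the unzip filename can’t drift apart (1.7.6.7 is just the version that was current when I wrote this; check the releases page):

```shell
# Keep the release version in one place (check their releases page for the current one)
PS_VERSION="1.7.6.7"
PS_ZIP="prestashop_${PS_VERSION}.zip"
PS_URL="https://github.com/PrestaShop/PrestaShop/releases/download/${PS_VERSION}/${PS_ZIP}"

echo "$PS_URL"   # eyeball the URL before downloading

# The actual download + unzip from steps 4-5:
#   wget -c "$PS_URL"
#   unzip "$PS_ZIP" -d prestashop/
```

Then when a new release comes out, you only change one line.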

Big Step 9 – Let’s Encrypt

I feel, although I’m not sure, that this is the right place for this section: after everything is set up and just before installing Prestashop. When you start the Prestashop installation process it passes sensitive data around, like passwords, so I feel it should be set up here. However, I’m going to write a warning here and then repeat it later: you must also enable SSL in Prestashop itself, in the admin area, or things are going to go really bad. SUPER special thanks to whoever you are, author of this blog.

At this point, too, you’ll need to make sure that your domain has propagated (which is why I put the nameserver stuff at the very top). If it hasn’t propagated, Let’s Encrypt will fail at the last part of its process. Here is how you can check that you’re ready to continue:

  1. From inside your server, check your current IP address with ifconfig (or ip addr on newer systems)
  2. In a new terminal window (not the server itself but your own PC in a different network) ping your domains individually: ping domain1.com ping domain2.com
  3. Make sure the bounce ip address from ping matches the ifconfig ip address of your server / droplet in step 1 above.

If you get a long hang or a failed ping, your domain probably hasn’t propagated yet. You’ll have to wait, as I don’t know of another way to speed things up, except I believe you can reduce the TTL from 3600 down, but that may have adverse effects on your server resources. Study this topic yourself if you run into it.

Bonus Tip
You can find out a bunch of cool stuff about your domains and propagation with the DIG command: dig domainname.com
This command is more informative than a simple ping and can at least tell you whether things ‘should’ propagate correctly before they actually do.
Here are two cool ones:

  • Find out the name server associated with your domain: dig domain.com NS
  • Check out that an A record is setup and working: dig domain.com A
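If you want to script the propagation check, here’s a hedged sketch: a tiny helper that just compares two IP strings, which you would feed with live dig output and your known droplet IP (the IPs and domain below are the usual placeholders):

```shell
# Pure helper: compare a resolved IP against the droplet's IP
check_dns() {
  # usage: check_dns <resolved_ip> <server_ip>
  if [ "$1" = "$2" ]; then
    echo "propagated"
  else
    echo "not yet (resolver returned '$1')"
  fi
}

# In real use you'd feed it live dig output, e.g.:
#   check_dns "$(dig +short domain.com A | head -n1)" "123.123.123.123"
check_dns "123.123.123.123" "123.123.123.123"
```

Run it every few minutes until it says ‘propagated’, then carry on with the certbot steps.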

Install Let’s Encrypt

  1. Add the PPA: sudo add-apt-repository ppa:certbot/certbot
  2. Install the certbot: sudo apt install python-certbot-apache
  3. Test the syntax of (what I believe are) the Apache conf files: sudo apache2ctl configtest. If you get an ‘OK’ message, continue. If you change anything in your conf files, don’t forget to reload Apache afterwards: sudo systemctl reload apache2
  4. MISSION CRITICAL Before enabling the firewall, be sure you allow the ssh rule, otherwise you’ll get locked out of your server like I did. Thankfully with DO, if you do get locked out you can go in through their virtual console, but still… So, simply run this line of code to open up SSH in the firewall: sudo ufw allow ssh
  5. Enable the firewall: sudo ufw enable
  6. Let https traffic through so that Letsencrypt can do its thing: sudo ufw allow 'Apache Full'
  7. Check the firewall status which should show active, as well as show BOTH 22/tcp and Apache Full enabled for both ipv4 and ipv6. Don’t continue until these are both present: sudo ufw status
  8. RUN IT sudo certbot --apache -d your_domain -d www.your_domain
  9. Most questions are easy to answer. Choose option 2 for normal and higher security, forcing redirects to https. Note! After you choose this, plain http:// requests on port 80 get permanently redirected to https://. This may be important? Not sure… it didn’t affect Prestashop.
  10. Probably you should test your domains here to make sure https:// is working. I didn’t write any steps for this as I figured Prestashop will be a good test coming soon…
  11. Repeat steps 8-10 for other domains (ie. domain2.com) if required adjusting as appropriate
  12. When done all your domains, do a dry run renewal to make sure renewal process is working well too: sudo certbot renew --dry-run

Bonus Section! How to get rid of Letsencrypt Stuff When everything goes bad…

Sometimes we make mistakes and things go wrong with SSL certificates. If you want to rule Let’s Encrypt and your certs out of the equation, it’s not as simple to get it out of your system as I thought it would be. So, here is your bonus section in case you need it nearby:

  1. Disable the letsencrypt conf file: sudo a2dissite prestashop-le-ssl.conf
  2. Totally uninstall letsencrypt (‘certbot’) sudo apt remove --purge python-certbot-apache
  3. Purge these dirs of their certs:

rm -rf /etc/letsencrypt
rm -rf /var/log/letsencrypt
rm -rf /var/lib/letsencrypt
rm -rf /path/to/your/git/clone/directory (seemed not applicable)
rm -rf ~/.local/share/letsencrypt (seemed not applicable)

  1. Wipe all the ssl conf files (which stuck around after wiping the above) in /etc/apache2/sites-available/ with something like:
    sudo rm shop.domain.com-le-ssl.conf
    and sudo rm shop.domain2.com-le-ssl.conf
  2. Restart apache: sudo systemctl restart apache2.service
  3. Restart installation process again if applicable sudo apt install python-certbot-apache

Big Step 10 – Installing Prestashop Via Browser

Point your browser at the domain you set up (which serves /var/www/domain.com/public_html/prestashop) and you should now see Prestashop magically start to do stuff if everything above went well.

You will walk through the install steps and connect the database to the mariadb database name, database user, and database password you created in Big Step 5 above. You will also create a shop admin user and password in the process. When the process is finished, it will warn you to delete your install directory. You cannot log in to your admin area without first deleting this, or, renaming this directory.

To delete it:
Change to where directory is: cd /var/www/shop.domain.com/public_html/prestashop
Remove install directory and all its contents: sudo rm -r install/

To rename it:
Change to where directory is: cd /var/www/shop.domain.com/public_html/prestashop
Move it (which actually renames it), giving it whatever name you’d like: sudo mv install/ install_123abc

Now go to your Prestashop admin area which will be something like domain.com/admin128r78575 (some random characters after the word admin). By the way, you can always find this by navigating to the prestashop directory and just looking for it. This is a nice tip if things go bad 😉

Log in with your shop admin user / password

STOP! Enable and Force SSL HTTPS:// NOW! Do not wait, wonder, etc. If you don’t enable this, things will be bad because your letsencrypt SSL certs are set up and forcing HTTPS:// traffic meaning you’ll never be able to see your shop from the outside without this step. Here is the link again to the post

Note If you happen to get this error while installing Prestashop, you can safely continue as it reportedly does not mess anything major up:

PrestaShop 1.7.5.0 install reports that:
“To get the latest internationalization data upgrade the ICU system package and the intl PHP extension”

This is a post I found which indicated this.

Oh, you probably want to rename your Presta-generated admin login directory, because it probably made a weird one like admin37dumj777, which means you’d have to log in to your back end at domain.com/admin37dumj777. I did this with this command (run from inside the prestashop directory): sudo mv admin37dumj777/ newadmindirectoryname/ which thus ‘moves’ it, or more accurately, renames it and doesn’t move it at all since we didn’t actually change its location. Nice trick, eh?? Now you can log in at domain.com/newadmindirectoryname/

Concluding Comments

I’m tired. That was hard. I hope it helped at least one person besides me. Have a nice day!

Categories
Tutorial Ubuntu Touch

Flashing Ubuntu Touch to your Pinephone’s EMMC memory

You might have already read my first round of ‘flashing 101’, where I successfully flashed Ubuntu Touch to an SD card, making it possible to boot the Pinephone that way with Ubuntu Touch. That first blog is a bit messy because I was learning, so this post here might actually help you with flashing to an SD card too, as it should be more clearly written. If I had more time I’d clean them both up, but I don’t, so yeah…

The next step in my learning is to figure out how to flash / copy Ubuntu Touch onto the Pinephone’s eMMC (built-in) memory. This should make it run faster. There wasn’t a step-by-step that I could find, so I guess I have to write it. Nice folks in the Pinephone telegram group said that I would follow a similar method as flashing to the SD card (see my detailed step-by-step on that here), so hopefully that is true. Let’s try…

1. Plug in your Pinephone

Unlike my other tutorial, you’ll need the device plugged into your production machine from where the image will be coming.

2. Get latest image

Choose the latest Pinephone image from this page. I usually choose ‘stable’, but as of today ‘stable’ is a figment of our imagination 🙂 Save it somewhere memorable; it’s probably smart to put it in a dedicated directory so you can run commands more simply and safely. I just realized there is a bmap file sitting with this image download on the same page. This could be useful and relevant soon, as you read on…

3. Get setup with BMAP

You can just follow my detailed blog if you’d like. If you don’t know what bmap is, then you probably want to do this.

4. Confirm the source of your ubuntu-touch-pinephone.img.xz file

This is the file you downloaded in step 2 above. In my case I’m going to flash from my dedicated ‘pinephone_flashing’ directory inside Downloads, so mine looks like this:

~/Downloads/pinephone_flashing/ubuntu-touch-pinephone.img.xz

5. Put Pinephone into ‘Yumi Mode’

I didn’t know what else to call this, so I named it ‘Yumi Mode’ since that’s the name of the UBports robot mascot… I tried to simply plug in the Pinephone and do this work, but nothing gets recognized that way. Thankfully I had read some other tutorial about how to get into this mode. Here’s how:

a) Assure device is powered off (you can push / hold power button for about 5 seconds)
b) Press / hold power and volume up buttons together for about 6 or 7 seconds.

If successful, you should start to see a trainload of partitions start to load in your file manager (Nautilus).

6. Identify the destination path to the EMMC

If you’ve done my other blog where you install Ubuntu Touch to the SD card, this time we’ll do the same thing but point the destination at the eMMC memory.

  • Hit the super key to the left of the space bar
  • Type ‘disks’ to open the ‘Disks’ utility. You should now see two items related to UT in the left pane of the ‘Disks’ app:

‘Drive | JumpDrive microSD’

and

‘Drive | JumpDrive eMMC’

  • Click the ‘Drive | JumpDrive eMMC’ in the left pane
  • Take the path from the ‘Device’ field. In my case it looks like this: “Device /dev/sdc”, but yours can – and very possibly will be – slightly different in the last letter. You can actually just highlight it and copy it from there

path: /dev/sdc

This is a great way to visually identify and confirm which mount point is your eMMC which will help you quickly deal with unmounting stuff in the next section

7. Unmount the partitions on the eMMC drive

If the drive is ‘fully ejected’ by Ubuntu, you will probably get this error when you run the final flashing command:

bmaptool: ERROR: cannot open destination file ‘/dev/sdc’: [Errno 123] No medium found: ‘/dev/sdc’

A reboot of the device again and putting it into ‘Yumi Mode’ again might fix it for you if you are lucky – once this worked for me – but that was it. I do not believe this can be done reliably and thus you should proceed with the command line method. It’s really not that hard if you work together with the Disks utility.

The reason why the ‘eject’ button in File Manager (Nautilus) doesn’t work for this purpose is that it uses udisksctl, which both unmounts and ejects the partition. We only want to unmount, not eject, you see. So, we must ‘terminally unmount’ these as follows:

First, run the mount command which will show you (truthfully) what drives are (actually) mounted on the system. For me these were all located at the very bottom of the terminal output list.

You can, however, in the Disks utility, click through each partition and see at the bottom ‘mounted’, ‘not mounted’, or ‘unknown’, and compare against the terminal output to gain further confidence that you are unmounting and flashing to the right place. The Pinephone image has a mixture of all of these, it seems, so it’s probably better to just manually unmount everything with the terminal. It’s really easy: once you identify the actually-mounted partitions you just run:

sudo umount /dev/sdc2
sudo umount /dev/sdc4
etc
etc

**Tip**: If while running the final flashing commands you get this error:

bmaptool: ERROR: the image file ‘/home/wt/Downloads/ubuntu-touch-pinephone.img.xz’ has size 826.3 MiB and it will not fit the block device ‘/dev/sdc3’ which has 1.0 KiB capacity

You are probably writing to a partition, like I did, instead of the actual device – e.g. you put in /dev/sdc3 instead of /dev/sdc.
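If you want to script the identification part, here’s a hedged sketch of a tiny helper that filters `mount` output for a given device. The sample text below is made up to show the shape of real `mount` output; the device letter is the usual placeholder:

```shell
# Helper sketch: given the output of `mount`, print only the partitions of one device
list_mounted() {
  # usage: list_mounted <device> <mount_output>
  echo "$2" | awk -v dev="$1" '$1 ~ "^"dev {print $1}'
}

sample='/dev/sda1 on / type ext4 (rw,relatime)
/dev/sdc2 on /media/me/boot type ext4 (ro,nosuid)
/dev/sdc4 on /media/me/root type ext4 (ro,nosuid)'

list_mounted /dev/sdc "$sample"

# In real use:
#   list_mounted /dev/sdc "$(mount)"
# ...then sudo umount each line it prints.
```

It just saves you scanning the bottom of the `mount` output by eye; the actual unmounting is still the same umount commands as above.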

8. Run the flashing command

This assumes you have already set up, read, and understood my blog above.

Format:

sudo bmaptool copy --bmap ~/path/where/your/bmap/file/is/located /path/where/your/image/is/located /path/to/memory/device

My actual example:

sudo bmaptool copy --bmap ~/Downloads/pphone.bmap ~/Downloads/ubuntu-touch-pinephone.img.xz /dev/sdc

Worked for me! Hope it works for you too 🙂

Categories
Technology Tutorial Ubuntu Touch

How to do an Ubuntu Touch Video Screen Cast Recording

Making a video screen cast of your Ubuntu Touch (“UT”) device is a bit difficult as of the date of this blog post, however, thankfully, there is a way to do it with a bunch of scary terminal commands.

This is pretty technical, but is probably the only way (for now) to do this. Special thanks to Amolith who forwarded to Joe (“in here”) who then forwarded it to me. The power of free software communities at work!

First, I’ll just give you the commands in case you are already familiar enough with terminals and commands. Then, I will provide a detailed how-to.

Record a big raw video file in interesting .raw format

EDIT 20/06/21
This first command has been updated to reduce ‘terminal weirdness’. Now it works quietly in the terminal. Special thanks to Rodney for that excellent improvement.

adb exec-out timeout 120 mirscreencast -m /run/mir_socket --stdout --cap-interval 2 -s 384 640 > ubuntu_touch.raw

Convert it to ‘normal’ format (mp4) with ffmpeg tool

ffmpeg -f rawvideo -pix_fmt rgba -s:v 384x640 -i ubuntu_touch.raw -filter:v "setpts=2*PTS" ubuntu_touch.mp4

I plan to give a few more how-to details next by editing this blog post but for now I at least wanted it logged somewhere where myself and others can easily find it.

Enjoy

Tutorial – How to actually do it.

Before you begin

I would recommend creating a directory / folder just for doing this stuff. Then, you will navigate in your terminal to this directory. That makes sure all the video is dumped in a place where you’ll be able to find it and work with it more easily. Otherwise, your terminal might autonomously dump it in an undesired location (your Home directory, if you left everything default…). Once you have created this directory, navigate to it with the terminal’s ‘cd’ command. If you don’t know how to do that, take a few minutes to search and learn? It’s a good life skill and you can brag about it at the coffee machine or water cooler…

Now that you have the scary command line codes above ready to copy and paste, here’s how you do it and what to expect.

First, you’ll have to make sure you have adb set up on your ‘production machine’ (the machine that will be recording the Ubuntu Touch output video). You should be OK to just do the usual sudo apt install adb, and then perhaps you’ll need your device in ‘Developer mode’ to do all this. You might find that your system already has adb. If so, it will tell you that your version is up to date, etc, and you can continue. If you need either of these or find it’s not working right, this blog has a lot of good info in one place. Most of it should still be current at the time of this post.

Next, plug in your UT device and get ready to record. Probably a quick test video is wise, by the way, before you start recording a long meaningful video.

Next, run the first command above by copying/pasting into your terminal. You will see this (or something very similar) in the terminal. Just pretend this means “your raw video is now recording to your production machine” because that’s what’s happening even though you can’t see it:

daemon not running; starting now at tcp:5037

daemon started successfully

When you are done recording, go back to your terminal and press Ctrl+C to stop the process. You will see absolutely nothing except your ^C. Just pretend this means “Your raw video file has now stopped recording and is sitting in the directory where you started it”.

Finally, you’ll need to convert the raw video to a human-usable format – and probably mp4 is what you’re looking for which is why the above command is setup the way it is. Note that if you want to do any other million different things with your video work, you could study the power of ffmpeg and do whatever you like with formats, resolutions, etc, etc….

Now, in your terminal after you run the converting command, you are going to see a lot of ‘stuff’. I’m not going to even paste it here because there is so much. I’m also not going to pretend I know what any of it means. Just be aware this is ‘normal’.

Once complete you should now have a .raw video and a .mp4 video in your screen casting directory.

Double click your .mp4 file to make sure it’s working and enjoy your new Ubuntu Touch video screen recording.

Categories
Life Skills Parenting Technology Tutorial Ubuntu

Making Roblox Work on Ubuntu in Windows 7 on Virtual Box

IMPORTANT! THIS BLOG POST IS BEING TESTED NOW AND NEEDS SOME WORK. I EXPECT TO UPDATE / IMPROVE THIS AT LEAST ONE MORE TIME. IT’S HERE JUST FOR TESTING PURPOSES. NOTE ALSO THAT AS OF TODAY, EVEN IF YOU GET THIS ALL DONE YOU COULD GET THE SAME ‘KICKED BECAUSE OF WEIRD BEHAVIOUR’ MESSAGE (OR WHATEVER IT’S CALLED). OF COURSE, FEEL FREE TO TRY IT OUT AND LEAVE COMMENTS WHILE I’M ALSO CHECKING IT! 🙂

0. Background

Kids wanted Roblox. I hate Windows. Roblox only works on Windows (well, Windows, Android, and maybe iOS – never checked), but not Linux… what? Serious? A goofy downloadable plugin-app-game kind of thing in 2020? Therefore, by deduction, I also hate Roblox. But let’s move on. The fact is, I compromised and made a concession that this setup would be for this one box, for this one purpose, and that’s it. I had an old Windows 7 license sticker on one of my Ubuntu machines.

1. Find a windows 7 cd rom or some ISO..somewhere…somehow…

Actually this step probably took a whole day. I hadn’t done this in years, and Windows is so lame that you have to buy their operating system (which isn’t worth paying for), and yeah. So I found some random link online and downloaded a Windows 7 Professional .iso file to match my Windows sticker (legitimate key). The fact that they even made it hard to download something you already paid for was additional fuel for my Windows fire…

2. Download Virtualbox on Ubuntu

I think this is in the software centre of most Ubuntu distros. Just search for it and install it. Tip: in the Ubuntu software centre you need to type the whole word for it to show up easily, so ‘virtualbox’ instead of ‘virtual box’

3. Install Windows on Virtualbox

Just start up a “new” machine and point it to your downloaded ISO above. Do the usual windows install that we used to do back when we were slaves…I just accepted all the default suggestions for setting up the box and then adjusted them later. This helps assure a successful install, I believe.

4. Install Guest Additions

This section I’m breaking into two pieces because I’m not 100% sure what’s best. I ‘think’ it depends on what Windows you are using as to whether you need to install guest additions in safe or ‘regular / unsafe’ mode. At least, that’s what my hours of web-searching taught me… So, you can ‘try’ the regular unsafe mode (skip ahead) or, you can do ‘safe mode’ which takes longer and is more annoying. In either case, some of the steps / process might help you along the way so maybe worth a quick read.

A. Installing in ‘Safe’ Mode [this section needs checking / testing]

If you already tried installing Guest Additions in 'unsafe mode' you might need to remove them before trying again in safe mode. That's what I did, anyway. Let's get this done:

  • While Windows is booting, press F8.
  • Choose 'safe mode with networking'.
  • In the VirtualBox menu of the guest machine window, go to the 'Devices' menu.
  • Insert the Guest Additions CD from the bottom of the drop-down menu.
  • In Windows, go to the 'Start' menu, then 'My Computer', then the CD-ROM drive.
  • In the CD directory, double-click 'VBoxWindowsAdditions-amd64' (assuming you're on 64-bit architecture…) and a wizard should start.
  • Check the 'Direct3D support (experimental)' checkbox.
  • Click 'Install'.
  • You may get 'trust Oracle?' messages. Even if they can't be trusted, it's easier to check the box and move on. After all, this is already a highly questionable game and enterprise…
  • Reboot? Yes.
  • I have notes saying I got a message like "Accept 'basic 3D'?" but I can't confirm. If you get this, I think you should accept it…
  • After the machine comes back, skip ahead and do all the 2D and 3D steps in the sections below.

By the way, here is a helpful [link](https://forums.virtualbox.org/viewtopic.php?t=55226).

B. Installing in ‘Unsafe’ Mode

This part got me bad. I also somehow had no idea about 'Guest Additions', so this turned out to be a good learning experience. What Guest Additions does is basically install a big package that gives the guest more direct, higher-quality access to the host machine's hardware. Before installing it I was getting all sorts of video card driver errors. When I opened Roblox Studio it was asking me to upgrade to OpenGL 2.0 or higher.

This step was as simple as going to 'Devices', choosing 'Install Guest Additions', and walking through the steps. A wizard then opened in Windows and walked through the install of the Guest Additions stuff. Finally it asked for a reboot, and when it came back things were already working a bit better. But I was still getting driver errors in Roblox Studio, which ultimately froze the program and demanded I close it, which I did. I also noticed that in 'Device Manager' under 'Display adapters' it now lists 'VirtualBox Graphics Adapter', which should be best since it's using the host hardware. And this is why I ended up doing all the steps in the 'safe mode' section above…

Check whether 3D acceleration is enabled by opening 'Run' and typing 'dxdiag'. You should see 3D acceleration listed as 'enabled'. Try a round of Roblox? 🙂

5. Enable 3D acceleration in Virtualbox

This one sucked another hour or two of my short life, so hopefully this can save you the pain. After doing all of the above I was still getting error after error. In my DirectX settings I was getting 'Direct3D not available' messages, and another setting was 'not available' too. I assumed that VirtualBox would have enabled 3D acceleration by default, but that was a bad assumption: VirtualBox is probably used by a lot of non-gaming developers who don't need it or the drain it puts on the host's hardware resources. Whatever the reason, 3D acceleration wasn't enabled. To enable it, shut down the guest machine, go to 'Settings' (yellow cogwheel), then 'Display', then check the 2D and 3D acceleration checkboxes (not sure if I needed 2D, but I wanted to be sure). You should probably also do step 6 below before starting the machine and save yourself a step. Video card stuff may also be linked to the dreaded 'Roblox kicked unexpected client behavior' message…
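The same checkboxes can be flipped from the host terminal, which also lets you verify the result. This is a hedged sketch: the VM name "Win7" is my assumption, the guest must be powered off when you run `modifyvm`, and option names can vary between VirtualBox versions:

```shell
# Enable 2D/3D acceleration from the host terminal instead of the settings GUI.
# Assumes a VM named "Win7" that is currently powered off.
VM="Win7"

if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "$VM" --accelerate3d on
  VBoxManage modifyvm "$VM" --accelerate2dvideo on
  # Confirm the settings took effect:
  VBoxManage showvminfo "$VM" | grep -i 'accelerat'
else
  echo "VBoxManage not found; install VirtualBox first"
fi
```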

A helpful link about 3D acceleration stuff.

6. Boosted video memory

I also noticed an 'invalid settings' warning in VirtualBox saying I had less than 27 MB of video memory, so in the guest machine's settings I raised it from 16 MB to 32 MB to see if that made things better.
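Same idea from the terminal, for what it's worth (again assuming a powered-off VM named "Win7", which is my placeholder name):

```shell
# Raise the guest's video memory from the host terminal.
# Assumes a VM named "Win7" that is currently powered off.
VM="Win7"

if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "$VM" --vram 32   # video memory in MB
else
  echo "VBoxManage not found; install VirtualBox first"
fi
```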

7. Overcoming the ‘roblox kicked unexpected client behavior’ issue

Frankly, I don't have the answer yet, but I'm working on it. It 'seems' unsolvable for both Wine and VirtualBox on Ubuntu, but I don't quit easily. For now it would be nice to have others help on this one, since I did all the heavy lifting. I feel there might be a browser hack or some other simple workaround to stop the player from getting kicked for no reason.