Categories
Business, Freedom and Privacy, Nextcloud, Technology, Tutorial, Ubuntu

Installing Nextcloud on Digital Ocean Ubuntu Server Droplet

Disclaimer: As always, my blogs are not supposed to be well-written pieces of technical literature but instead a better-than-crap version of a messy notepad. I hope it's clear enough to help someone other than myself, but I always check them to make sure at least I can understand it, ha.

It seems that there is a nice blog written on this, and that Nextcloud (NC) is also – literally – a 'snap' to install on a server.

At some point in the process of learning this I assumed you had to install either Nginx or Apache2 yourself but, apparently, it comes packaged with Apache and no one told me. So, good news: now you know. The other cool thing I learned is that it comes with its own LetsEncrypt easy-installer thing…

I noticed the blog starts by assuming you know how to create a sudo user and do your firewall stuff, but let's not assume that, because there are probably others out there like me. Thankfully I've already written this up for myself elsewhere, so I'll paste those instructions right into this tutorial as the first few "Big Steps".

I will follow this nice blog

Any time you see ‘123.123.123.123’ this is your IP address for your server. Replace it with your actual one, of course.

You should also have a nice password manager set up to save yourself much time and pain as you learn. I recommend KeePassXC.

Let’s get started

Big Step 1 – Set up the Digital Ocean Droplet

  1. Make sure you already have a domain purchased and reserved for your needs. This is optional but I will assume you're using one for this tutorial.
  2. Initiate your server / droplet
    I won't give detailed instructions here because there is lots of documentation out there. I selected one with 2GB of RAM (at time of writing it's $10.00 USD/month for this) only because the teacher recommended it. I didn't research hardware specs and what I should do; your needs may be different. I know Nextcloud Talk uses a fair bit of RAM, but I'm not sure how much for a large conference… worth investigating, perhaps? I might also recommend calling your subdomain 'nc' instead of 'files' like I did in my example, since you might do much more than 'files' with NC. 'office.domain', 'nc.domain' or 'cloud.domain' might make more sense?
  3. In your domain registrar, point your domain to your server's name host. In my case it will be Digital Ocean's, which I will pull from the 'networking' section after adding the domain to Digital Ocean. In my case I already had a test domain set up, so what I did was set up a subdomain called "files.domain.com". The reason you want to do this right away is because DNS propagation can take an hour or more depending on where you are.
  4. Set up your SSH keys
    Before you create the droplet, make sure your SSH keys are set up properly because you are going to use them a lot in the terminal. Here is some ssh tutorial. Note that it's a little hard for me to find SSH stuff in the DO backend, so for the record it's at settings / security and then a tab (I think…). If you already have a DO space and SSH keys set up, you're ready and can use those keys again.
  5. SSH into your new server
    ssh root@123.123.123.123

Note 1: Normally at this point when I install a server I do the usual sudo apt update and sudo apt upgrade, but the snap install seems to handle this for you.

But… you don’t have a sudo user yet so here you go:

Big Step 2 – Make a sudo (non-root super user) user

Set up a non-root user account on the server
This is a nice detailed tutorial on this if you’d like here.

Otherwise, this summarizes it:

  • Log into the server as root: ssh root@123.123.123.123
  • Make sysadmin user (you can name it what you’d like and don’t have to use ‘sysadmin’): adduser sysadmin and follow the prompts (should be password prompt and then a few questions)
  • Add new user to the sudo list: sudo usermod -aG sudo sysadmin
  • Switch to new sysadmin user: su sysadmin
  • (Optional) Test the new-found powers by running this and then cancelling: sudo dpkg-reconfigure tzdata
  • If this last part worked (and conveniently also set up your time stuff!) then you should be good to move forward with your new non-root user.

Helpful note
If you find you are not getting prompted for the questions after running the adduser command, it might be because you accidentally ran the useradd command like me. useradd is the lower-level tool and doesn't prompt for a password or details; adduser is the friendlier wrapper that does.

From here, follow this blog again and make sure you follow his exact firewall rules. The SSH rule is already done, but the rest seem to be an updated way of doing it; when I tried the older way I had nothing but headaches.

I would add these notes to go with the end of his blog where he talks about checking the non-root user. This is how I had to think about it:

  • To be sure the non-root user is working before logging out as root, open a new terminal window (so both are open)
  • Try to log in with your new non-root user with ssh sysadmin@123.123.123.123
  • If successful, log out of your root user and continue; you can close the original root terminal window

Big Step 3 – Checking the Ports

Just before beginning the install of Nextcloud via snap, take a minute to check your ports. I wasted the better part of a day over this.

Before moving on, take a moment to check your ports with a website like this and make sure at least 443 and 80 are open for all the domains you have set up for this box and also the IP address.
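
If you prefer the terminal, a quick sanity check (my own suggestion, not from the blog) might look like this. The ufw command runs on the server; run the nmap one from a different machine:

sudo ufw status verbose
nmap -p 80,443 123.123.123.123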

Big Step 4 – Follow the Nextcloud Install blog!

Thankfully the same author as the blog above also wrote a blog on how to install the Nextcloud snap. I tried other blogs but they were all failing, and I think it's because something changed between Ubuntu 18.04 and 20.04… not sure, but this Nextcloud setup blog was the only one that worked for me.

Some of my notes to go with his great blog:

  • I liked the idea of using the command line to quietly create the user / pass for the master admin account, rather than exposing that setup page to the internet and hoping no one lands there before you do. It shows that all you have to do is run this:

sudo nextcloud.manual-install <username> <password>
for example
sudo nextcloud.manual-install sammy password

However, when I ran it it hung and never finished:

sysadmin@files:/root$ sudo nextcloud.manual-install sammy password
>

Well, it turns out there must have been something 'weird' with the auto-generated password I had. It was loaded with special characters, and that stray > is actually bash's line-continuation prompt, which suggests the shell was treating one of those characters (probably an unmatched quote) as shell syntax rather than passing it along. After I removed special characters from the password (not advised!) it worked fine with the syntax above.
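
A better fix than weakening the password, I suspect, would be to wrap it in single quotes so the shell passes the special characters through untouched (the password below is obviously just an example):

sudo nextcloud.manual-install sammy 'Str0ng#P@ss!w0rd'

Single quotes protect everything except a single quote itself.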

Otherwise, you can go to the non-https web page and configure the admin account yourself via the gui.

Note this: Another way to deal with trusted_domains list

I would also add these notes to go with the ‘trusted_domains’ conversation in the blog. This was a bit new to me and caused me a few headaches over the last few weeks so I wanted to leave a record of ‘another way’ to deal with them.

I was greeted with this issue which I needed to solve.

The blog above has the very simple-looking command to add trusted domains to a list that will allow access to the nextcloud instance. The command is as follows:

sudo nextcloud.occ config:system:set trusted_domains 1 --value=example.com

What was not taught here is what's going on, though. The 1 is the array index that gets logged in a php file somewhere in Whoknowswhereville, USA, so for each one that you add (ie. www, IP address, main domain, etc.) be sure to increment it by one.

The default 0 will be 'localhost', so start with 1 and work up from there. Note also that you must have all the spaces verbatim: if you don't have a space after trusted_domains and after the 1, for example, you'll have to sudo nano edit your mistake out, ha. So choose your poison. I find the sudo nano method below just as easy now that I know how to do it, but I think this one-line command is great too if you are slow and steady.
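
So a full run for a typical setup might look like this (domains illustrative; the get command at the end just prints the array so you can verify your work):

sudo nextcloud.occ config:system:set trusted_domains 1 --value=files.domain.com
sudo nextcloud.occ config:system:set trusted_domains 2 --value=123.123.123.123
sudo nextcloud.occ config:system:get trusted_domains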

Without further ado, if you goof up a trusted_domain entry, you can manually edit the whole thing in the php array.

It took me a while, but I finally found it here:

cd /var/snap/nextcloud/current/nextcloud/config

The file is called 'config.php', and to edit it you can just run the following, scroll down to the 'trusted_domains' array, and type your domains in following the localhost example:

sudo nano /var/snap/nextcloud/current/nextcloud/config/config.php
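
For reference, the trusted_domains block in config.php looks something like this (domains here are just examples):

'trusted_domains' =>
array (
  0 => 'localhost',
  1 => 'files.domain.com',
  2 => '123.123.123.123',
),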

After adding the trusted_domains, I was able to access my NC instance.

Big Step 5 – Secure the traffic with LetsEncrypt!

Make sure your firewall stuff is done – hint.

Here's a key lesson: this bad boy doesn't work with the 'regular LetsEncrypt' installation method, so if you are in the habit of installing certificates that way, slow down and stop. This is the command for installing it with the NC snap:

sudo nextcloud.enable-https lets-encrypt

This was thanks to the blog

But before you actually run that command above, let's return to our friend's blog above and open up the necessary ports:

sudo ufw allow 80,443/tcp
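
(A quick sudo ufw status afterwards will confirm the rule took – my suggestion, not the blog's.)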

Now go ahead and run the LetsEncrypt command above and follow along in the blog to make sure you're doing it right.

Did you get a failed LetsEncrypt attempt like I did one time, with a message like this?

To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

If so, you probably have a closed port. See my warnings above.

Big Step 6 – Enjoy

There are probably some great blogs out there for actually using NC, but at least now you can go to your domain and get started.

Have a nice day!

Categories
Nextcloud, Technology, Tutorial

Fixing My Dead Nextcloudpi box in three simple steps

My Nextcloudpi box was kicking back errors and wouldn't start; it was in a degraded state. I then learned about the systemctl --failed command, which showed something wrong with certbot. Thankfully I already knew that was related to LetsEncrypt.

I realized it was trying to renew certificates for two domains that weren't there any more: they were expired and would be unable to renew anyway even if they tried, so I'm not sure why they were there in the first place, other than perhaps some testing I did a long time ago. I wanted to remove them completely to rule them out, so here are the instructions for anyone else trying the same thing on Nextcloudpi or another server with 'random letsencrypt domains'.

Part 1 – Removing Random LetsEncrypt Domains and Certificates

It should be as simple as running these commands, replacing yourdomain.com with your domain verbatim.

  1. sudo rm -rf /etc/letsencrypt/archive/yourdomain.com
  2. sudo rm -rf /etc/letsencrypt/live/yourdomain.com
  3. sudo rm -rf /etc/letsencrypt/renewal/yourdomain.com.conf
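
Side note: if certbot itself is still healthy on your box, I believe it can do this same cleanup in one shot (I haven't tested it in this broken state):

sudo certbot delete --cert-name yourdomain.com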

Part 2 – Disabling the Deleted Site from Apache’s Reach

After doing Part 1 above and removing these random domains that should not have been there, I ran sudo systemctl restart certbot.service to restart the certbot service. That seemed to work. However, when I ran sudo systemctl status it still did not show a green light.

After digging deeper by running a sudo journalctl -xe command, the log showed there was a syntax error in /etc/apache2/sites-enabled/000-default.conf. After going to line 43 of that file, I realized it was referencing one of the domains I had deleted in Part 1 above. Thankfully I already knew how to enable and disable Apache sites.

To do that I did:

sudo a2dissite 000-default.conf

Then I did a:

sudo systemctl restart apache2.service

Then I did another check on services:

sudo systemctl status

Nice, the apache stuff now seems fixed but…

Part 3 – Fixing some Wifi-Country.service Error Thing

I still had another error which the system seemed unwilling to work around! Why? Why me? haha

wifi-country.service loaded failed failed Disable WiFi if country not set

So then I found this blog where I pasted the following to the bottom of the file (presumably /etc/wpa_supplicant/wpa_supplicant.conf, given the contents):

country=CA
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="testing"
psk="testingPassword"
}

Full disclaimer: I have absolutely no idea what this stuff above does, so yeah – copy and paste at your own risk like I did!

There weren't any instructions to restart a service, which I had expected. And, as expected, it was still dead. Thankfully my guess was correct and there should have been a service restart command, as follows:

sudo systemctl restart wifi-country.service

That seemed to work…

Then I ran sudo systemctl status and everything looked to be running. Yay!

Then BOOM! System back. Nextcloudpi started syncing again.

I have no idea why things went bad, but it's a good day when you can fix it, indeed! It usually doesn't go this well for me, but I guess I'm learning a few things, so let's keep suffering for growth, shall we?

Have a good day!

Categories
Technology, Tutorial, Ubuntu

Advanced PDF Printing on Ubuntu

I travelled down a road that failed, but I learned some cool things about printing to PDF on Ubuntu that I think are worth logging and sharing for others.

1. Out-of-the-Box PDF Printing (in case you didn’t know)

In case you are new to Ubuntu and didn't know it, basic PDF printing is already installed and working perfectly. All you do is choose 'print to file' when you select your printer and PDFs are created. This might be all you need from this tutorial.

2. Converting with ImageMagick's 'convert' Command (Ghostscript under the hood)

This convert command (part of ImageMagick) is powerful. You can convert pretty much any image file to PDF, and probably other file types too. If you don't believe me how powerful it is, just type man convert and check out the options…

A problem, however, is that there was a significant security issue in the past (in the Ghostscript layer that handles PDF and PostScript), and so PDF rights for the command are disabled by default in ImageMagick's security policy. If you are running a server you should probably do your own research on the topic, but if you are a regular user like me you can probably follow the advice I found online and just rename the security policy file out of the way. Here is what I did, and it removed all the permission errors I got while trying to run the command:

Move to the directory where the policy is: cd /etc/ImageMagick-6
Move the 'policy.xml' file while giving it a new name (which ultimately just renames it out of the way): sudo mv policy.xml policy_disabled.xml
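
A less drastic option (my suggestion, not from the advice I found online) is to edit just the PDF rule inside policy.xml instead of renaming the whole file. The stock policy contains a line like this:

<policy domain="coder" rights="none" pattern="PDF" />

Changing rights="none" to rights="read|write" should re-enable only PDF conversion and leave the rest of the policy intact.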

Now you can run the ‘convert’ command and send whatever it is to a pdf:

convert original_image_file.jpg output_pdf_file.pdf
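
convert will also happily combine several images into one multi-page PDF, if that's useful (file names are just examples):

convert page1.jpg page2.jpg page3.jpg combined.pdf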

3. Installing a ‘Virtual Printer’ and ‘Printing’ to it graphically

Install the thing: sudo apt install printer-driver-cups-pdf

As soon as it's done you should be notified that a new 'printer' has been installed, just as if it were a normal hardware printer. Now you can simply 'print' as you normally would to the printer called "PDF". It will output the files to the "PDF" folder in your home directory.

4. Advanced printing with the command line using the Virtual Printer

Now that you have installed the printer-driver-cups-pdf virtual printer above, you have a 'fake printer' on your computer and can do 'lpr printing' in the command line. You can learn about what you can do with lpr here, but what we're going to do is simply direct the printing to the printer called "PDF".

So whatever you can do with a regular printer, it seems you can now do straight to PDF.
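
A minimal example of my own (assuming the queue really is named "PDF" – the first command will confirm):

lpstat -p
lpr -P PDF somefile.txt

The resulting PDF lands in the "PDF" folder in your home directory, same as with graphical printing.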

5. Other

There is also ps2pdfwr, which may be worth researching; I didn't have time, but it seems like there may be other value there. It is already installed with Ubuntu (it comes with Ghostscript). It takes PostScript stuff and makes PDFs. You can type man ps2pdfwr in your terminal to learn more.
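
Basic usage appears to be just input then output (file names illustrative):

ps2pdfwr input.ps output.pdf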

Hope this helps.

Categories
Technology, Tutorial, Ubuntu, Ubuntu Touch

RECOVERING FILES USING SCALPEL AND UBUNTU

“Ooops. I deleted my photos”

I used Foremost in this blog to try to recover deleted mp4 video files (for my sister) using Ubuntu, but it wouldn't find the video files. It found a bunch of photos but not her supposed videos.

Then I thought maybe the tool wasn't capable, so I tried to figure out Scalpel, which is 'the other' major tool besides Foremost.

But how to use it?

This video is an absolutely great way to get started doing the basics. Definitely take the time to watch it, as it will give you a good bird's-eye view, not just of carving an image file but also of carving a device (ie. hard drive, USB drive) connected to the machine.

Also, the man scalpel page was pretty useful; however, I still couldn't figure out

a) where the ever-important scalpel.conf file was, and
b) how to do mp4, which was (oddly) missing from scalpel.conf

It was pretty surprising that mp4 was missing, that’s for sure…

However, I did find this blog which got me most of the way there in terms of basic learning. Interestingly, the author did not make the following code containing the mp4 header/footer stuff available to simply copy and paste, but I was able to type it out manually from his screenshot so that myself (and the world) can just copy and paste it. Here it is:

EDIT 200816 – See the update below the disclaimer before using this. I was wrong…

mp4 y 30000000:70000000 \x46\x4c\x56\x01\x05\x00\x00\x00\x09\x00\x00\x00\x00\x12\x00\x02\x36

Important Disclaimer: I have no idea if this entry is right, or works, but the program itself, I can verify, ran perfectly and carved for files. No videos were ultimately carved in my situation, but I don't think that is the fault of this software nor my copy/paste stuff above; I don't think there actually were any videos on my drive to start with. But I wanted to be clear that I simply typed out another blogger's stuff and stuck it here.

I was wrong, it seems. I kept getting nothing at all, which I thought was weird. Then I decided to dump the hex stuff above into a hex-to-text converter and discovered it translates to 'FLV' – a Flash video signature, not mp4. So then I used the same hex editor, wrote 'mp4' in there, and got this output:

6d 70 34

Then I updated the scalpel.conf file to look like this:

mp4 y 30000000:70000000 6d 70 34

This started working, so I appear to be on the right path. However, the files were still not playable and came in about 100 small chunks per directory. I still need a correct scalpel.conf entry to carve a whole mp4 file… any help appreciated in the comments below! (Worth noting: mp4 files don't literally begin with the text 'mp4'; they normally have a 4-byte length followed by 'ftyp' at offset 4, which may explain the unplayable chunks.)

Then I finally found another blog which showed that the scalpel.conf file is located at /etc/scalpel. Now you don't have to spend an hour learning that!

The first thing I did, once I had what I needed to paste and knew where to paste it, was edit the scalpel.conf file with the following command, copy/paste the above 'stuff' at the very bottom of the file, and then ctrl+x, etc., to save the changes:

sudo nano /etc/scalpel/scalpel.conf

Once that is done, we need to run the command. Even this kind of messed me up with a few of the tutorials and even the man pages I read. Here is the syntax you'll need to start it running, with my breakdown of each component following:

sudo scalpel -c /etc/scalpel/scalpel.conf -o /path/to/output/directory/directoryname /path/to/location/of/image.img

Quick explanation of this so you can plug your details into the command and get carving:

sudo – it requires super user permissions (and your password)
-c – this flag points to the scalpel config file you edited above. If you want, you can move it, but if you didn't, the path in my example above is correct
-o – this points to the output path where you want the recovered files to go
directoryname – at the end of your path, add a useful name you'll remember; this directory will be created and the recovered files placed inside it if it works
/path/to/location/of/image.img – likely you have already cloned the drive and are not using the original (recommended for many reasons), so this is the path to where on your machine the image file is located
image.img – represents the name of your image, whatever it is.

So once you customize your details and hit 'go', you should be able to start carving files. Also, if you want to carve for more than just mp4s, you can un-comment options in the scalpel.conf file, or add others, and they will all run at the same time once you start.
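
For example, the jpg entry in the stock scalpel.conf ships commented out and looks something like this (double-check your own copy); removing the leading # enables it:

# jpg  y  200000000  \xff\xd8\xff\xe0\x00\x10  \xff\xd9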

I found that Foremost found more .jpg files than Scalpel, but I might have configured one or both of them incorrectly…

Hope this helps you recover some lost goods!

Categories
Life Skills, Technology, Tutorial, Ubuntu, Ubuntu Touch

RECOVERING DATA FROM YOUR ANDROID DEVICE USING FOREMOST AND UBUNTU

“Ooops. I deleted my photos”

Well, I thought this would be a bit easier than it was, but such is life – HARD… May this tutorial save many or all of you lots of time. You can probably do all this with Kali, Debian, or other distributions that have the same commands, but I'm Ubuntu, so Ubuntu it will be.

My goal here was pretty simple. My sister wiped the photos from her phone (or someone she knows did…) and they are gone. One would think they would be saved somewhere in Google's creepy backup network, but it seems not. Perhaps when you press 'delete' on your Android device they delete first from your device and then from the Google servers as they sync? That would actually be a good thing for security and privacy reasons – which is why I doubt it works like that (ha). Anyway, I use Ubuntu Touch, so I don't know. But I do know that Nextcloud is awesome, and I believe you can set it up for manual backups so syncing doesn't happen automatically; you can probably set it up just fine on Android, too. This would be the ideal setup, because if you accidentally delete stuff like this the deletion won't make it to the server and you could restore from there… Anyway, my sister wanted these photos and I like learning, so I started reviewing how to do it from a tutorial I never finished back in 2013. At that time I had wiped all my emails in Thunderbird, and I recall I did successfully get them all back.

The first interesting thing I encountered is that when I plugged the Android device into my Ubuntu computer – unlike my Ubuntu Touch mobile device – it 'mounts' but doesn't show up as a usual sdb, sdc, sdd, etc. drive in the file manager. The reason for this is that Android apparently uses 'MTP', and this was the source of my initial pain. If you are interested, here is a link about my MTP journey, only because I don't want to waste the time I spent logging it all, and perhaps one day it will be useful related info.

Cloning the Drive with ddrescue!

To clone the SD card of the device we are going to use a tool called ddrescue, which annoyingly is called 'gddrescue' in the Ubuntu packages, yet when you run it, it's 'ddrescue', not 'gddrescue'. I wasted a solid 15 minutes trying to figure that one out…

  1. Install ddrescue on Ubuntu: sudo apt install gddrescue
  2. Learn how it works and make sure you have the right tool: man ddrescue (again, not 'man gddrescue')(man!)

This blog post was good, except that lshw -C disk only shows disks and direct USB drives, I guess, but not SD cards. Since my SD card is in the SD card adaptor directly in my laptop, I'm going to use lsblk, which will show the SD cards and their 'mount points'. If you haven't seen these before, they typically show up as an entry starting with 'mmc'.

  1. Identify your SD card mount logical whatever: lsblk
    Mine is ‘mmcblk0’ but yours could be different. If there is a ‘tree’ of partitions, just choose the top level one for the card.

Documentation shows the basic command syntax for ddrescue as: ddrescue [options] infile outfile [logfile]

They also had this example:

root# ddrescue -f -n /dev/[baddrive] /root/[imagefilename].img /root/recovery.log

I will change the above example to the following, but you’ll adjust yours accordingly:
sudo ddrescue /dev/mmcblk0 ~/samsungs7.img
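
One thought before you run it: their example included a log file, and it might be worth keeping, since ddrescue uses that 'map' file to resume an interrupted clone – something like:

sudo ddrescue /dev/mmcblk0 ~/samsungs7.img ~/samsungs7.log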

What I'm hoping this will do is burn a copy of the SD card to an image file which I will then be able to carve. If you do your research you'll find that it is always recommended to clone a drive and do the forensics work on the clone rather than directly on the 'active' drive. I've removed the options (the -f and -n from their example) because I want the most thorough image creation. Disclaimer: I have not studied ddrescue in detail, so always do your own deeper dive if you have time.

Note 1 before you run the command to clone: I quickly found out at this point that my laptop didn't have enough free disk space, so I decided to clone the drive straight to another USB pendrive of the same size to keep my laptop from the bad situation of running out of space. However, at the same time I thought I would take my own advice and use a drive the same size as the cloned image. Wrong move. Even though advice online says you need 'a drive of the same size or larger', there must be tiny size differences between manufacturers, because just as ddrescue was finishing the job of cloning, it stopped and told me the USB drive had run out of space. I ended up pointing it to my big storage drive. So, my advice is this: output your cloned image to a drive LARGER than the drive being cloned. This could save you time, headaches, and confusion.

Note 2 before you begin to clone: be careful that your path for the output image file is a 'real storage path' on your machine, and do not accidentally point at a device such as '/dev/sdc', or you will get the 'this is not a directory' warning. If you get this warning, simply check the path and adjust accordingly.

The following is my more layman's example of the command I used successfully, showing the SD card input device with the Android data on it ("mmcblk0"), the output image file going to my big storage drive ("HardDriveName") into a directory on that drive that I had already created (for example, a directory called 'samsung'), and finally the name of the clone image created in that directory ("cloneImageName.img"):
sudo ddrescue /dev/mmcblk0 /media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img

Update the above according to your paths, of course, including the mmcblk0, as yours might be different – lsblk to confirm…

When the ddrescue command is running it looked like this for me:

GNU ddrescue 1.22
ipos: 67698 kB, non-trimmed: 0 B, current rate: 43384 kB/s
opos: 67698 kB, non-scraped: 0 B, average rate: 22566 kB/s
non-tried: 15957 MB, bad-sector: 0 B, error rate: 0 B/s
rescued: 67698 kB, bad areas: 0, run time: 2s
pct rescued: 0.42%, read errors: 0, remaining time: 11m
time since last successful read: n/a
Copying non-tried blocks… Pass 1

Carving the files with Foremost!

Good job. We now have the clone .img file, and now comes the hopeful recovery stuff. We will now start ripping through the clone to see what we can find.

  1. Get software called foremost: sudo apt install foremost
  2. Learn about it: man foremost

In this case I'll be trying to carve out picture files. It's a Samsung and I happen to know all the photos are .jpg, so I'm going to narrow this carving down to .jpg only, to speed it up.

foremost -i /media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img -o /media/user/HardDriveName/path/to/place/to/save/restored/pictures -t jpg -v

You can see now that my input file / target file is the same one we created in the cloning section above:
/media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img

The place where we’ll save the carved files is:
/media/user/HardDriveName/path/to/place/to/save/restored/pictures

I stuck -v on the end to make it more obvious (verbose) in the terminal what’s happening. I recommend the same.
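
(Side note: if you want to look for more than one file type in a single pass, foremost takes a comma-separated list after -t. Worth knowing: mp4 doesn't appear to be a built-in foremost type, which might explain the empty video results; mov is, though, and it's a closely related container. For example:)

foremost -i /media/user/HardDriveName/path/to/place/to/store/clone/cloneImageName.img -o /media/user/HardDriveName/path/to/place/to/save/restored/pictures -t jpg,mov -v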

When everything is complete, by the way, Foremost creates and saves a file in the carved-files directory called 'audit.txt'. A specific directory called 'jpg' is also created, which then houses the files. This is nice; I didn't know this happens before I began, so I wasted some time creating directories with custom file-type names. No need – Foremost does this for you.

The process may take a long while so pack a lunch…

Conclusion

These two tools, ddrescue (for cloning) and Foremost (for carving), are great and not so hard to use. Foremost successfully found some deleted photos; however, it was unable to find deleted videos. In fact, Foremost didn't find a single video on the SD card, which makes me wonder if it worked or if I did something wrong. I specified the .mp4 extension but got nothing. I will continue to look into this, but for now, if you need to recover photos, I can tell you this worked great for me.

In fact, I ended up studying and using a second piece of carving software called Scalpel, which turned out also to be great, but it seems a little less simple to use. I created a similar instructional blog about Scalpel file carving on Ubuntu if you'd like to try that one too.

Categories
Technology, Ubuntu, Ubuntu Touch

Log notes from my MTP android ubuntu learning day

Before you read this: big disclaimer. This post is written in super crappy style and is nothing more than my personal notes as I tried to figure out MTP. The only reason I'm even putting this post online is that a few of the things I learned below seem like they will be relevant for future Ubuntu Touch / Android porting things. Even skimming over the process I went through seems useful. However, if you are looking for a good tutorial about how to actually do something with MTP / Android / Ubuntu, this is not that. Hopefully it will provide some value to someone, though.

With that out of the way, here comes the crap:

I went down a long road installing the 'MTP://' mount thing with sudo apt install mtp-tools (very few tutorials on this one), but after it was all said and done, I don't think it was required, because it seemed that all the entries for the 'udev/rules.d' files were already complete.

I had assumed you need to plug in the device and clone the internal storage (EMMC) via USB, but after some chats with some software communities it seems that 'it doesn't work like that' and the situation is as follows (feel free to comment below to correct me and help other readers):

  • Everything nowadays is related to the SD card
  • You have to clone the SD card
  • You have to do forensics / file carving on the cloned drive image

I still feel there has to be a way to carve stuff from the EMMC via MTP, but that perhaps is for another tutorial blog. If you want to look into this, I will provide my learning here, but if you have an Android device with the userdata on the SD card (this is probably the case), then skip on to the following section:

  • How to ‘mount’ an Android device? With MTP
  • What is MTP? It’s this
  • If you are on ubuntu and try lsusb and can see something like this, you are probably already set up well with MTP:
    “Samsung Electronics Co., Ltd Galaxy (MTP)”

Un-mounting android.
un-plugging android.
lsusb
good. I now get “Samsung Electronics Co., Ltd Galaxy (MTP)”

Now to figure out how to find that drive… how does one mount via mtp-tools?
It seems like most stuff is from circa 2014 which is possibly old and scary at this point.
man mtp-tools

nice. tools… manly tools.

Wow. Bad tool page. It looks like mtp-tools don't really 'mount' the drive as a drive but just help you communicate with the device. Not what we're trying to do here. Jumping up a level.

This guy seems to have figured it out in this video. I watched it, felt scared, then read the following code in his notes and the fear went away. Now, we already have mtp-tools installed from above, so I'm going to assume we don't need to do all of it; however, comments on YT from a week ago say 'thanks man, it worked' so… yeah. Maybe I'll just follow it to a tee:

sudo apt-get install libmtp-common mtp-tools libmtp-dev libmtp-runtime libmtp9

sudo apt-get dist-upgrade

sudo nano /etc/fuse.conf

lsusb

sudo nano /lib/udev/rules.d/69-mtp.rules

# Device
ATTR{idVendor}=="XXXX", ATTR{idProduct}=="YYYY", SYMLINK+="libmtp-%k", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

sudo nano /etc/udev/rules.d/51-android.rules

ATTR{idVendor}=="XXXX", ATTR{idProduct}=="YYYY", MODE="0666"

sudo service udev restart

sudo reboot

I'm going to walk through these commands one at a time and freshen them up if they need freshening.

sudo apt install libmtp-common <– not needed. shows current
sudo apt install libmtp-dev <– yes, required, not currently installed
sudo apt install libmtp-runtime <– no, not required, showing current
sudo apt install libmtp9 <– no, not required, shows current.

Therefore, the first line could be:

sudo apt install libmtp-dev

I'm a little concerned about running sudo apt dist-upgrade because I thought it upgrades your entire distro from, say, 18.04 to 18.10 (it turns out that's do-release-upgrade; dist-upgrade 'just' adds or removes packages as needed while upgrading). It still seems a bit intense, so I'm going to try to skip that one. Instead I did:

sudo apt update and sudo apt upgrade

Since we already have mtp-tools installed, and we know that lsusb is showing the right stuff ("Bus 002 Device 036: ID 04e8:6860 Samsung Electronics Co., Ltd Galaxy (MTP)"), I think we can move on to adding the rules.

Blast. It looks like this one rule location might be out of date: sudo nano /etc/udev/rules.d/51-android.rules

cd /etc/udev/rules.d/ to search for it…

not there… maybe needs reboot. Rebooting….

sudo nano /etc/fuse.conf
uncomment (remove the #) in front of 'user_allow_other'
ctrl+x to save changes

sudo nano /lib/udev/rules.d/69-libmtp.rules <– note, this is different from tutorial, couldn’t find his file name and assumed this is updated name
1m15s video is useful to watch as he pastes

connect phone
do an lsusb in one terminal
open new terminal window so you can see both together

take this, and update the XXXX to your device things as listed in lsusb in the other window

# Device
ATTR{idVendor}=="XXXX", ATTR{idProduct}=="YYYY", SYMLINK+="libmtp-%k", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

Mine will become, for example, AS DEMONSTRATED at 1m44s in the video with blue and red arrows (great visual, by the way!):
idVendor is the first 4 characters before the colon
idProduct is the 4 characters after the colon

# SAMSUNG GALAXY 7
ATTR{idVendor}=="04e8", ATTR{idProduct}=="6860", SYMLINK+="libmtp-%k", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"

I like to do stuff in text editors and make sure it's good before pasting into terminals, but that's up to you.

Oh CRAZY. It was already in this file: sudo nano /lib/udev/rules.d/69-libmtp.rules

now what… hmm

Well, I guess we can say that was 'step 1: make sure that your device is recognized by MTP'. Now that that is good, we must do the next stuff:

sudo nano /etc/udev/rules.d/51-android.rules

Not there. But found it here:

/lib/udev/rules.d

so:

`/lib/udev/rules.d/51-

Well, that was all kind of a waste of time. It turns out that the SD card is the mount point of the EMMC memory anyway (apparently), and I discovered there was an SD card in the device, so that may work…

Categories
Business, Life Skills

GOOD BUSINESS COMMUNICATIONS IN OUR FRACTURED AGE

Proper communication is paramount. In the same way that handing someone a cheap, faded, crappy business card paints the wrong picture of your company and brand, a poorly communicated message does the same about you as a person.

We live in a time when it often feels there are more communication options than people with whom we communicate. Further, those people (us) are getting bombarded with endless streams of communications, 24-7-365. So how should we communicate with them, and when we are using these mediums, what are the communication expectations?

I will begin first by mapping out the major categories of communication media and then break each one out into more detail.

Major Communication Media

  1. Chatting / Texting: Includes SMS (cellular phone text messaging), Instant Messengers (IM) such as Telegram, WhatsApp, Matrix, or – more precisely – any real-time live stream of text-based communication.
  2. E-mail: E-mail includes e-mail. It’s awesome that e-mail requires no explanation after 40 years…
  3. Voice (phone) call: Includes online Voice-Over-Internet-Protocol (VOIP) conversations (such as Skype), video conversations, cellular voice phone calls, 'FaceTime', and other such human-to-human, voice-to-voice communication.

Chatting / Text

History
Typically and historically speaking, such a chat medium would never be used professionally. On the old T9 keyboards on a cell phone, you would have to push a single number button three times to find the letter 'c' while composing. Messages would typically be 'See you soon' or "I'm 5 min late". That's it. You would never consider this kind of medium acceptable in a business environment. Nor would you even give someone related to your work or business your personal cell phone number, unless you were in an occupation such as a REALTOR which would require a weekend phone call. You also would never have your cell phone on or audible in a public environment (gone are those days, sadly), so it was not a very reliable way to communicate, even if it was permissible.

Then came 'instant messages', which started to blur the line between voice, email and texting. The same kinds of skills were used, but it was faster and 'always on'. Traditionally, though, this was either for internal stuff at a company or between friends and family. No professional would use such a medium in the context of, say, a sales conversation, or even a REALTOR communicating with his/her client. There is also a high risk associated with texting / chatting because once delivered, it's a written record, and oftentimes we send these messages while angry or – some people – even while drunk. One sends an e-mail in this state much less often, simply because it's a touch less convenient.

Expectations in Business
For personal communications, 'do what you want', but in business I would recommend limiting this form of communication to zero for any external communications related to traditional selling or communications with vendors or professionals. The context of a live text chat on a website or tech-support live chat is fully acceptable because the content is always the same and 100% focused on the task at hand. Very rarely would you talk about sports with the sales rep on a website, but if this happens, staff should be instructed not to engage and to politely bring the focus back to the task at hand. The risk is too high; the record is permanent. Above all, cellular SMS / text-messaging should always be avoided, 100% of the time, whether personal or professional, because the technology is so insecure and open that you might as well post it on the lamp post. Never use SMS/texting if you can avoid it, and if you use it, never ever put personal or sensitive info in the message.

For internal communications, however, texting/chatting can be an excellent communication medium and should not only be accepted but embraced. The 'ping' features (ie. using the @ before the username) of most chat apps are a very useful way to reach out to someone for faster and more pointed communications. Services like Slack, Rocketchat, etc., can dramatically improve collaboration as well.

Context / Function

  • Typically used between friends and family.
  • Not recommended for external business communications.
  • Useful for instant stuff when a quick response is needed, or when a real-time, on-the-fly discussion can't wait out an e-mail or the coordination of a voice call.

Pros

  • Fast
  • Limited thinking required for grammar and style
  • Good for quick survey questions
  • Convenient.

Cons

  • Easy to lose
  • Easy to forget
  • Not very permanent.
  • Easy to make career-limiting and life-reducing errors in the heat of the moment…

Email

History
Historically, the name 'e-mail' means exactly what you'd guess: a hand-written piece of paper mail, but sent electronically. As a result of being one of the first electronic written forms of communication, long-standing expectations exist whether you know about them or not. Although the use of PGP technology can secure email very well, most people do not use this end-to-end encryption, and as such, e-mail should not be considered secure or private in any way. Further, if someone decides to try to sue you, e-mail can and will be used against you in a court of law.

Expectations in Business
E-mail format is more formal and more professional. Composition time is longer and requires more thought (a good thing), which leads to more thoughtful responses.

The expected response time from someone on the other side is, conveniently, also longer. This is excellent for a busy person because one can organize one's time better with e-mail. The level of grammar is higher, and the use of 'lol' and other 'chat-based stuff' is avoided. However, an occasional text-based smiley such as 🙂 would nowadays be considered acceptable if used tastefully, and after the relationship with the person on the other side has somewhat matured.

Although it depends on your business, it is often acceptable to respond within one business day. Your competitors will probably respond within the same day or sometimes the next day. To ensure you are one of the fastest responders in your industry, I would recommend 'less than 3 hours' wherever possible. Responding too quickly could give the appearance that you are not busy and may have an adverse effect, so consider this too.

If an email is written to you, someone has essentially declared, “I’m spending my time to communicate with you”. As such, whether they say it or not, they will have an expectation of a response to show respect of their time. It is very easy to respond and we can break e-mails into the following two general categories: A) an email where an action is required or B) an email where no action is required. Let’s explore both of those in a bit of detail.

A – If an action is required you should, as quickly as you can:

  • Reply to the e-mail with the action item completed, if you can do the action right away, or,
  • Reply to the e-mail with an estimated time of completion of the action item, if you can do the action, but cannot do it immediately, or,
  • Reply to the e-mail immediately with an explanation of the situation, if you cannot do the action item at all.

By replying to the e-mail in all three cases above, you are acknowledging the person's time and helping them plan their situation accordingly. Otherwise, they are left in limbo. Silence, in the world of e-mail, can be more frustrating than a negative response or rejection.

B – If an action is not required, you should, as a general rule and as quickly as possible:

  • Read the email and respond with a polite confirmation of receipt, if you are able to read it properly right away, or
  • Respond with a confirmation of receipt and let them know by when you will read it in detail and respond, if you will not be able to read it properly for a fairly long period of time.

In this case, not much is required of you, but the confirmation of receipt will really help present you as a good communicator who is respectful of others' time.

It is wise to simplify it as follows: an e-mail requires a response every time, unless the email is absolutely informational only (ie. a mass newsletter to a group) and there is no chance the person would expect a reply. If you aren't sure, send at least a confirmation of receipt, such as 'Received, thanks', or 'Thanks!'.

While we are on the topic of group e-mails: do not use the reply-all feature unless it is absolutely required and you want to announce to the whole group that you have read it. People typically find this super annoying. The reply feature is sufficient to confirm receipt, unless there is a reason to announce it to the group.

With e-mail, be very careful with humour. Due to varying skill levels in the communication language, ethnic differences, cultural differences, etc., humour needs to be handled very carefully until a solid relationship is established. Avoid the usual politics and religion until the relationship is established. At that point, it could be a very enriching experience to be 'more real' with all these great folks and use the God-given gifts you have with boldness.

E-mail requires a higher level of language proficiency and professionalism – if you're going to send a poorly-written email with spelling errors, missing capitals, and typos to a customer or even a vendor, it would be worth considering not sending it at all. The good news, however, is that it's super easy to write a proper and professional e-mail. This simple how-to blog is an excellent starting point. If you need to write less formally, just tone down the salutation and add a bit more personal fluff. I would highly recommend reading this blog and considering implementing the concepts of 'keywords in subject lines' and 'bottom line up front'. I firmly believe if everyone did this (at least internally) in business, we'd all have a much better life.

Context / Function
E-mail is ideal for business and recommended as the default communication medium for anything serious in writing. E-mails are regularly used in court cases and should be treated the way a hand-written letter would have been back in the day. When an e-mail is sent, business is being done. A professional dialogue would typically never take place in a chat/text environment as our friend has been doing, but instead by e-mail.

Pros

  • Long history of solid functionality. It ‘just works’.
  • Very ‘official’

Cons

  • Requires more time and skill to compose.
  • Has a higher chance of misinterpretation than a voice call or a text chat, because of its context as a more serious communication.
  • Slower response time.
  • Less convenient.
  • Rarely secured by encryption (although possible).

Voice Calls

For this section we will assume the traditional ‘ring ring telephone call’ even though an internet video / voice call could do the same thing.

History
The first 'instant voice communication', and a reliable, proven, and effective medium.

Function / Context
This is the most old-school communication tool in the world other than snail mail, and it is very personal, as you can hear the person's voice or even see their face with video. The communication is instant and effective. Regardless of what the other kids say, more can be accomplished in a 5-minute phone call than a 30-minute meeting (and it's more enjoyable).

Expectations in Business
A phone call interrupts someone's task and someone's day. Let me say that again because it seems a lot of people don't get this simple fact: a phone call interrupts someone. When you make a phone call, it had better be important or valuable to the person you are calling, or you had better have a strong tone of humility and politeness if not. For salespeople this is paramount, because if you call someone to sell something – even something they need – while interrupting their crisis situation, they will hate you and your product (and your dog and goldfish while they're at it). Always ask, "Is it OK to have a quick chat right now?" This one sentence will save you much pain.

A phone call is technically more private. Although audio can be recorded, and sometimes is, typically speaking it's much more difficult for a voice call to become part of a court record. If you need to communicate something sensitive or 'on the side', this medium is definitely the best. It is also one of the clearest forms of communication, because you can hear more of the person's emotions and the nuances in their voice. In the other, text-based communications it's very (very, very) easy to communicate something unintended with just one word. Countless are the stories of fights that have started over one misplaced or out-of-context word. Always use a voice call if there is risk of this.

Pros

  • Long, proven, and reliable network with a really wide reach. “It just works”
  • Fast
  • Personal
  • Reasonably secure and private
  • Better for communicating 'nuance of the language' than written formats
  • Highly prioritized

Cons

  • Very interruptive
  • Pretty annoying
  • Getting less and less acceptable to younger generations

Conclusion

Nothing is more important in business than your communications and your communication style. You could have the nicest business card in the world but it will end up in the garbage can if you fail on the items above.

Of course, we must conclude the topic with the word 'grace'. We all fail; I'm by far the worst. I like humour, and humour in communications is risky business, and I've offended not a few folks already as a result.

Categories
Freedom and Privacy, Tutorial, Ubuntu Touch

How to set up Gitlab with Ubuntu Touch 2FA Manager App

I had been using the Ubuntu Touch Authenticator app, but I wanted to try out the 2FA Manager app to see how that worked for me. The only problem was that I had never reset the 2FA stuff I originally set up, so this was my first time switching devices. It was easy, but it was also not documented. I hope this simple guide helps, and perhaps the steps of this process also apply to other 2FA sites:

In your Gitlab account

  1. Log into your gitlab account
  2. Go to “Settings” under your top right avatar
  3. Go to “Account”
  4. Click the ‘Manage Two Factor Authentication” button
  5. Click the 'Disable two-factor authentication' button, which will then give you this warning: "Are you sure? This will invalidate your registered applications and U2F devices."
  6. Select ‘Ok’ on warning screen

This will now present you with a screen and QR code, which you will scan to bring the account into the Ubuntu Touch 2FA Manager app. So now let's move to your Ubuntu Touch device…

On your Ubuntu Touch device

  1. Download / install the Tagger app if you have not already done so
  2. Download install the 2FA Manager app if you have not already done so
  3. Open the 2FA Manager App
  4. Open the Tagger app, point the camera at the QR code, and patiently wait… adjust the camera position if needed until it registers the QR code. You will know it has worked when the dialogue with the "open url, copy to clip, generate qr code" buttons pops up
  5. Select ‘open URL’
  6. Add a description. This is just a friendly name that will help you remember what this key is for as you may build up many keys over the years.
  7. Press the 'Add' button. You should now be taken to the 2FA Manager app, to a screen with a bold 6-digit code and a purple progress meter that is moving. Each time the progress meter reaches the end, it loops back with a new 6-digit number. You will also notice that if you tap this 6-digit code it opens up much bigger and easier to read, and also gives you the ability to copy the code to the clipboard. Whichever way you prefer, these are the two ways you can obtain your 2FA codes; these codes are what you will use to log into Gitlab each time, and also to finish up your setup in your Gitlab account.
  8. Go back to Gitlab

Meanwhile, back at Gitlab…

  1. Look at your 2FA Manager app and grab the 6-digit code presented on the screen. But hurry! (dramatic music here) If you don't grab it quickly enough, the next one will be generated. This is for security, if you were wondering, and limits the chance of someone stealing your stuff…
  2. Enter the 6-digit code into the 'pin code' field on the Gitlab 2FA setup page you should still be on
  3. Select ‘Register with two-factor app’ button
  4. Save your recovery codes: If all goes well you should now be presented with a 'congratulations' screen and a bulleted list of 10 random-looking letter-and-number strings. These are your recovery keys, so save them somewhere safe. I recommend the KeePassXC app, as it's a really great password manager for your laptop / desktop Ubuntu environment.
  5. After you have these recovery codes safely stored, press 'proceed'. If you get an 'invalid code' message, it probably means you weren't fast enough with steps 1-3 above, so repeat, but faster 😉

You should now be brought back to your Gitlab Account page and be good to go. You can probably use the workflow above to do 2FA setups in other systems as well.

Categories
Business, PrestaShop, Technology, Tutorial, Ubuntu

Setting Up Prestashop on Digital Ocean Ubuntu Server

The background to this post is that I had a sudden need to 'figure out shopping carts'. I understood from times past that it probably isn't wise to host a shopping cart that intends to scale on a shared cPanel host, even though that's pretty dead easy to set up. I decided this was the day I had to learn how to set up an Ubuntu server myself and face it until I knew enough to do so.

I started with Digital Ocean because I recall testing a 'droplet' and being surprised how easy and nice it was. They also support UBports and their servers, so I appreciate that. I got a promotional code to try it out and I'm glad I did. I think it's a really nice solution.

The real purpose of this tutorial is to make sure I don’t forget all the things I learned and also to leave some bread crumbs for others who might want to do the same thing.

I landed on PrestaShop because it's very open software, very flexible, has been around for many years, and has a bunch of modules that should be quite helpful for rolling out store after store.

The idea of this server is that it will run the ‘main business website’ as well as the shopping cart.

Any time you see ‘123.123.123.123’ this is your IP address for your server. Replace it with your actual one, of course.

You should also have a nice password manager set up to save yourself much time and pain as you learn. I recommend KeePassXC.

Let’s get started

Big Step 1 – Set up the Digital Ocean Droplet

  1. Initiate your server / droplet
    I won't give detailed instructions here because there is lots of documentation out there. One thing I will say is that I used an Ubuntu 18.04 LTS server with 2GB of RAM to make sure I had enough for both the main site and Prestashop.
  2. Immediately set up your A records so your domain points to the new server.
    In Digital Ocean ("DO") it's pretty easy but I don't have any links to it. Just go to 'networking', add your domain, then create your domains, subdomains, etc., and then point the A record to the server you just initialized in step 1 above. This step is important to do right away because propagation of the name server to the IP address / box can take an hour or more depending on where you are.
  3. Set up your SSH keys
    Before you create the droplet, make sure your SSH keys are set up properly because you are going to use them a lot in the terminal. Here is some ssh tutorial. Note that it's a little hard for me to find SSH stuff in the DO backend, so for the record it's at settings / security and then a tab (I think…)
  4. SSH into your new server
    ssh root@123.123.123.123
  5. Set up a non-root user account on the server
    This is a nice detailed tutorial on this if you’d like here.

Otherwise, this summarizes it:

  • Log into the server as root: ssh root@123.123.123.123
  • Make sysadmin user: adduser sysadmin and follow the prompts (should be password prompt and then a few questions)
  • Add new user to the sudo list: sudo usermod -aG sudo sysadmin
  • Switch to new sysadmin user: su sysadmin
  • (Optional) Test the new-found powers by running this and then cancelling: sudo dpkg-reconfigure tzdata

Helpful note
If you find you are not getting prompted for the questions after running the adduser command, it might be because you accidentally ran the useradd command like me. useradd is the lower-level tool and doesn't prompt; adduser does.

  6. Update the server's packages
    sudo apt update
    sudo apt upgrade

Big Step 2 – Install Apache

  1. sudo apt install apache2
  2. Check to see if this triggered new packages to upgrade: sudo apt update
  3. If packages need updating: sudo apt upgrade

Big Step 3 – Install PHP

There were about a million different tutorials out there that worked but were pretty hard. At the end of all that I found this, which seems to work just fine and dandy, so hopefully it works for you:

  1. Add PPA: sudo add-apt-repository ppa:ondrej/php
  2. Update packages: sudo apt update
  3. Install php (adjusting the version number over time and making sure the version is correct for Prestashop / other software): sudo apt install php7.3
  4. Test php version: php -v
  5. Add the modules required for Prestashop (adjust for whatever else you are installing): sudo apt install libapache2-mod-php7.3 php7.3 php7.3-common php7.3-curl php7.3-gd php7.3-imagick php7.3-mbstring php7.3-mysql php7.3-json php7.3-xsl php7.3-intl php7.3-zip
  6. Open this file (adjust PHP version as needed / as appropriate): sudo nano /etc/php/7.3/apache2/php.ini
  7. Add these lines of code. Truthfully I don't know where you are supposed to add them, so I added them near the bottom, in the first space from the bottom I could find, but not at the very bottom. (Most of these directives already exist in the stock php.ini, so you could also search for each one with ctrl+w in nano and edit it in place.) Adjust as needed:
file_uploads = On
allow_url_fopen = On
short_open_tag = On
memory_limit = 256M
upload_max_filesize = 100M
max_execution_time = 360
date.timezone = America/Chicago
  8. Restart Apache after making PHP changes: sudo systemctl restart apache2.service (I’m not 100% sure this step is required, but it never hurts!)
  9. Test that PHP is working by:
  • Creating this test file: sudo nano /var/www/html/phpinfo.php
  • Adding this info to the file and saving: <?php phpinfo(); ?>
  • Going to the URL where this file lives, where you should see a boring PHP info page: 123.123.123.123/phpinfo.php
  • Deleting the file if the test is good: sudo rm /var/www/html/phpinfo.php
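Bonus check (my own habit, not from the tutorial I followed): you can also list the PHP modules that actually loaded right from the terminal and grep for the ones Prestashop needs, instead of squinting at the phpinfo page:

php -m | grep -E 'curl|gd|intl|mbstring|mysql|zip'

If one of those doesn’t show up in the output, re-run the module install line from step 5 above.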

Big Step 4 – Install Mariadb

Apparently MariaDB is a drop-in replacement for MySQL that’s getting more love as we move into the future. I’m just following along.

  1. Create a database name and save it to your password manager. I’m going to use ‘presta_db’ for this. You can call it what you’d like.
  2. Create a database user name and a password and do the same in the password manager.
  3. Install mariadb: sudo apt-get install mariadb-server mariadb-client
  4. Do a security pass on it and set a root password for the database: sudo mysql_secure_installation. Follow the prompts, very likely with these answers: Enter, Y, password, repeat password, Y, Y, Y, Y

Big Step 5 – Create your Database in Mariadb

Assuming you did the steps above in step 4, you should now be able to log into your database area with the new password you created. You don’t actually HAVE to use CAPS for the SQL keywords (they’re case-insensitive), but it’s the convention and a nice visual aid, which is probably why we see it around town…

  1. Log into mariadb: sudo mysql -u root -p
  2. Create your database with the database name you saved to your password manager above: CREATE DATABASE presta_db;
  3. Create the user and password for the database replacing username and password in the following code with yours between the apostrophes: CREATE USER 'username_here'@'localhost' IDENTIFIED BY 'new_password_here';
  4. Grant your new user full privileges on the database (that’s what this line does, lol): GRANT ALL ON presta_db.* TO 'username_here'@'localhost' IDENTIFIED BY 'same_password_here' WITH GRANT OPTION;
  5. Flush the privilege tables so the grant takes effect: FLUSH PRIVILEGES;
  6. Exit stuff: EXIT;
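A little sanity check I’d add here (mine, not from the blog I’m following): log back in as the new user to prove the user and grant actually work before Prestashop depends on them:

mysql -u username_here -p presta_db

If it drops you at a MariaDB [presta_db]> prompt, you’re golden (type EXIT; to leave). If you get an ‘access denied’ error, redo steps 3 to 5.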

Big Step 6 – Create the Directory Structure for a Shared Server Environment (Optional)

First, a good resource for hosting multiple domains on a single Ubuntu server.

I will provide the directory structure I used for both the ‘standalone’ setup and the ‘shared’ setup (multiple domains on one machine). You can choose and adjust how you like.

  1. Make the full path to the place where you plan to put Prestashop on a shared server environment:
  • sudo mkdir -p /var/www/shop.domain.com/public_html
  • repeat this command for each domain, changing the shop.domain.com/ part to whatever is appropriate

On a standalone environment: sudo mkdir -p /var/www/html/domain.com
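If you’re doing the shared setup with a handful of domains, here’s a hedged little shortcut (same result as repeating the mkdir by hand; the domain names are just my examples):

# make the docroot for each shared-server domain (adjust the list to your domains)
for d in shop.domain1.com domain2.com; do
  sudo mkdir -p /var/www/$d/public_html
done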

Big Step 7 – Set up your Virtual Host Files in Apache for Each Domain (Shared server environment)

This blog resource really says it all, except you don’t have to mess around with the /etc/hosts stuff he talks about at the end. I did that and it messed things up in the DO environment… The key with this section is to make sure (in whatever way you are most comfortable) that you have a ‘domain.com.conf’ file sitting in the /etc/apache2/sites-available/ directory with the correct domain information in that file, saved. If you don’t, you’ll have pain. The nice part is this wasn’t as scary as I first thought. The key items you’ll need are probably these for a basic setup, and this worked fine for Prestashop for me:

ServerName domain.com
ServerAlias www.domain.com
ServerAdmin yourname@youremail.com
DocumentRoot /var/www/domain.com/public_html/prestashop
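For context, here’s roughly what the whole conf file might look like once those key items are in place. This is just a sketch based on Ubuntu’s default template with my example domain, so adjust the names, email and paths to yours:

<VirtualHost *:80>
    ServerName shop.domain.com
    ServerAlias www.shop.domain.com
    ServerAdmin yourname@youremail.com
    DocumentRoot /var/www/shop.domain.com/public_html/prestashop
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

The ErrorLog / CustomLog lines come along with the default template and are worth keeping because they tell you where to look when things go bad.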

Lets Encrypt (awesome free SSL cert system) will do ‘the rest’ for you in an upcoming step to deal with port 443 / https://, in case you were wondering…

Here is a summary of what I do showing two domains on a shared environment getting their own conf files which we can then edit:

Copy default template for domain 1: sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/shop.domain1.com.conf

Copy default template for domain 2: sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/domain2.com.conf

sudo nano edit each of the above configs with:

  1. Uncomment the ServerName line and add the correct domain
  2. Add a ServerAlias line for the www entry below it (it’s not in the default template for some reason, so type it in: ServerAlias www.domain.com )
  3. Update the admin email
  4. Update the document root to /var/www/shop.domain1.com/public_html
  5. Repeat for conf files for any other domains on server
  6. Reload Apache just because: sudo systemctl reload apache2.service
  7. Disable the default conf file: sudo a2dissite 000-default.conf
  8. Enable domain 1: sudo a2ensite shop.domain1.com.conf
  9. Enable domain 2 (if applicable): sudo a2ensite domain2.com.conf
  10. Enable re-write mode (which drops some code into your conf file above): sudo a2enmod rewrite
  11. Restart Apache again for fun: sudo systemctl restart apache2.service
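Before moving on, here’s a sanity check I find handy (my addition, not from the blog): ask Apache to dump its virtual host table and confirm each domain points at the right conf file:

sudo apache2ctl -S

You should see each enabled domain listed under port 80. If one is missing, its conf file probably isn’t enabled or has a typo in it.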

Big Step 8 – Putting Prestashop on the server

  1. Install the unzip tool for dealing with the Prestashop zip file: sudo apt install unzip
  2. Navigate to a safe place for doing stuff: cd /tmp/
  3. Make sure you update the link in the following code to the latest version. You can verify by going to their releases page, clicking the latest release, right-clicking on the zip file, selecting ‘copy link location’, and pasting it over the URL in the code here:
  4. Download the goods! wget -c https://github.com/PrestaShop/PrestaShop/releases/download/1.7.6.7/prestashop_1.7.6.7.zip
  5. Create a prestashop empty directory and unzip the file you just downloaded in step 4 into it: unzip prestashop_1.7.6.7.zip -d prestashop/
  6. Move up a level to be able to perform the next ‘move’: cd ..
  7. Move the whole directory you just made to the appropriate directory you created in “Big Step 6” above. Adjust as required to your situation: sudo mv prestashop /var/www/shop.domain.com/public_html/
  8. Assign the correct ownership to the prestashop directory (adjust path as fitting for your situation): sudo chown -R www-data:www-data /var/www/shop.domain.com/public_html/prestashop/
  9. Assign the correct permissions to the same directory: sudo chmod -R 755 /var/www/shop.domain.com/public_html/prestashop/
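To double-check that the ownership and permissions actually took (again, my own habit):

ls -ld /var/www/shop.domain.com/public_html/prestashop/

You should see something like drwxr-xr-x ... www-data www-data, meaning Apache’s user owns the directory and the web server can read it.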

Big Step 9 – Lets Encrypt

I feel, although I’m not sure, that this is the right place for this section: after everything is set up and just before installing Prestashop. When you start the Prestashop installation process it passes sensitive data around like passwords, so I feel SSL should be in place here. However, I’m going to write a warning here, and then repeat it later: you must also enable SSL in Prestashop itself in the admin area or things are going to go really bad. SUPER special thanks to whoever you are, author of this blog.

At this point, too, you’ll need to make sure that your domain has propagated (which is why I put the nameserver stuff at the very top). If it hasn’t propagated, Letsencrypt will fail on the last part of its process. Here is how you can check to make sure you’re ready to continue:

  1. From inside your server, check your current IP address with ifconfig
  2. In a new terminal window (not on the server itself but on your own PC on a different network) ping your domains individually: ping domain1.com and then ping domain2.com
  3. Make sure the IP address returned by ping matches the ifconfig IP address of your server / droplet from step 1 above.

If you get a long hang or a failed ping, your domain probably hasn’t propagated yet. You’ll have to wait, as I don’t know of a way to speed things up, except I believe you can reduce the TTL from 3600 down, but that may have adverse effects (more frequent DNS lookups). Study this topic yourself if you run into it.

Bonus Tip
You can find out a bunch of cool stuff about your domains and propagation with the dig command: dig domainname.com
Because dig shows you what the name servers are actually returning, it can at least tell you whether things ‘should’ propagate correctly before they actually have.
Here are two cool ones:

  • Find out the name server associated with your domain: dig domain.com NS
  • Check that an A record is set up and working: dig domain.com A
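And one more I’ll throw in myself: dig +short prints just the answer, which makes it dead easy to compare against your droplet’s IP:

dig +short domain1.com A

If that spits out 123.123.123.123 (your actual droplet IP, of course), you’re propagated enough to carry on.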

Install LetsEncrypt

  1. Add the PPA: sudo add-apt-repository ppa:certbot/certbot
  2. Install the certbot: sudo apt install python-certbot-apache
  3. Test the syntax of what I believe are the Apache conf files: sudo apache2ctl configtest. If you get a ‘Syntax OK’ message, continue. If you change anything in your conf files, don’t forget to reload Apache afterwards: sudo systemctl reload apache2
  4. Enable the firewall: sudo ufw enable
  5. MISSION CRITICAL: Right here and right now, be sure you allow the ssh rule in the firewall, otherwise you’ll get locked out of your server like I did. Thankfully with DO, if you do get locked out, you can go in through their virtual console, but still… So, simply run this line of code to open up SSH in the firewall: sudo ufw allow ssh
  6. Let https traffic through so that Letsencrypt can do its thing: sudo ufw allow 'Apache Full'
  7. Check the firewall status which should show active, as well as show BOTH 22/tcp and Apache Full enabled for both ipv4 and ipv6. Don’t continue until these are both present: sudo ufw status
  8. RUN IT: sudo certbot --apache -d your_domain -d www.your_domain
  9. Most questions are easy to answer. Choose option 2 for normal and higher security, forcing redirects to https. Note! After you choose this, plain http:// requests on port 80 get redirected to https:// from then on (you could undo it later by editing the conf certbot writes, but out of the box it’s forced). This may be important? It didn’t affect Prestashop.
  10. Probably you should test your domains here to make sure https:// is working. I didn’t write any steps for this as I figured Prestashop will be a good test coming soon…
  11. Repeat steps 8-10 for other domains (ie. domain2.com) if required adjusting as appropriate
  12. When you’ve done all your domains, do a dry-run renewal to make sure the renewal process is working well too: sudo certbot renew --dry-run
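One more certbot trick worth knowing (my addition): it can list every certificate it’s managing along with the domains covered and the expiry dates, which is a nice way to confirm all your domains actually got their certs:

sudo certbot certificates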

Bonus Section! How to get rid of Letsencrypt Stuff When everything goes bad…

Sometimes we make mistakes and things go wrong with SSL certificates. If you want to rule Letsencrypt and your certs out of the equation, it’s not as simple to get it out of your system as I thought it would be. So, here is your bonus section in case you need it nearby:

  1. Disable the Letsencrypt SSL conf file (adjust the name to match yours): sudo a2dissite prestashop-le-ssl.conf
  2. Totally uninstall letsencrypt (‘certbot’) sudo apt remove --purge python-certbot-apache
  3. Purge these dirs of their certs:

rm -rf /etc/letsencrypt
rm -rf /var/log/letsencrypt
rm -rf /var/lib/letsencrypt
rm -rf /path/to/your/git/clone/directory (seemed not applicable)
rm -rf ~/.local/share/letsencrypt (seemed not applicable)

  4. Wipe all the SSL conf files (which stuck around after wiping the above) in /etc/apache2/sites-available/ with something like:
    sudo rm shop.domain.com-le-ssl.conf
    and sudo rm shop.domain2.com-le-ssl.conf
  5. Restart Apache: sudo systemctl restart apache2.service
  6. Restart the installation process again if applicable: sudo apt install python-certbot-apache

Big Step 10 – Installing Prestashop Via Browser

Point your browser at your shop’s domain (the one whose document root is /var/www/shop.domain.com/public_html/prestashop) and you should see the Prestashop installer magically start to do stuff if everything above went well.

You will walk through the install steps and connect to the MariaDB database with the database name, database user, and database password you created in Big Step 5 above. You will also create a shop admin user and password in the process. When the process is finished, it will warn you to delete your install directory. You cannot log in to your admin area without first deleting or renaming this directory.

To delete it:
Change to where directory is: cd /var/www/shop.domain.com/public_html/prestashop
Remove install directory and all its contents: sudo rm -r install/

To rename it:
Change to where directory is: cd /var/www/shop.domain.com/public_html/prestashop
Move it (which actually renames it), giving it whatever name you’d like: sudo mv install/ install_123abc

Now go to your Prestashop admin area which will be something like domain.com/admin128r78575 (some random characters after the word admin). By the way, you can always find this by navigating to the prestashop directory and just looking for it. This is a nice tip if things go bad 😉

Log in with your shop admin user / password

STOP! Enable and force SSL HTTPS:// NOW! Do not wait, wonder, etc. If you don’t enable this, things will be bad: your Letsencrypt SSL certs are set up and forcing HTTPS:// traffic, meaning you’ll never be able to see your shop from the outside without this step. Here is the link again to the post.

Note If you happen to get this error while installing Prestashop, you can safely continue as it reportedly does not mess anything major up:

PrestaShop 1.7.5.0 install reports that:
“To get the latest internationalization data upgrade the ICU system package and the intl PHP extension”

This is a post I found which indicated this.

Oh, you probably want to rename your Presta-generated admin login directory because it probably made a weird one like admin37dumj777, which means you’d have to log in to your back end at domain.com/admin37dumj777. I did this from inside the prestashop directory with: sudo mv admin37dumj777/ newadmindirectoryname/ which thus ‘moves’ it, or more accurately renames it, since we didn’t actually change its location. Nice trick, eh?? Now you can log in at domain.com/newadmindirectoryname/

Concluding Comments

I’m tired. That was hard. I hope it helped at least one person besides me. Have a nice day!

Categories
Tutorial Ubuntu Touch

Flashing Ubuntu Touch to your Pinephone’s EMMC memory

You might have already read my first round of ‘flashing 101’ where I successfully flashed Ubuntu Touch to an SD card, making it possible to boot the Pinephone that way. That first blog is a bit more messy because I was learning as I went, so this post might also help you with flashing to an SD card since it should be more clearly written. If I had more time I’d clean them both up, but I don’t, so yeah…

The next step in my learning is to figure out how to flash / copy Ubuntu Touch onto the Pinephone’s eMMC (built-in) memory. This should make it run faster. There wasn’t a step-by-step that I could find, so I guess I have to write it. Nice folks in the Pinephone telegram group said that I would follow a similar method as flashing to the SD card (see my detailed step-by-step on that here), so hopefully that’s true. Let’s try…

1. Plug in your Pinephone

Unlike my other tutorial, you’ll need the device plugged into your production machine, the one the image will be flashed from.

2. Get latest image

Choose the latest Pinephone image from this page. I usually choose stable, but as of today ‘stable’ is a figment of our imagination 🙂 Save it somewhere memorable; it’s probably smart to put it in a dedicated directory so you can run commands more simply and safely. I just realized there is a bmap file sitting with this image download on this page. This could be useful and relevant soon, as you read on…

3. Get setup with BMAP

You can just follow my detailed blog on this if you’d like. If you don’t know what bmap is, then you probably want to do that.

4. Confirm the source of your ubuntu-touch-pinephone.img.xz file

This is the file you downloaded in step 2 above. In my case it’s going to flash from my dedicated ‘pinephone_flashing’ directory inside Downloads, so mine looks like this:

~/Downloads/pinephone_flashing/ubuntu-touch-pinephone.img.xz

5. Put Pinephone into ‘Yumi Mode’

I didn’t know what else to call this so I named it ‘Yumi Mode’ since that’s the name of the UBports robot mascot… I tried simply plugging in the Pinephone and doing this work, but it won’t recognize anything. Thankfully I had read some other tutorial about how to get into this mode. Here’s how:

a) Make sure the device is powered off (you can push / hold the power button for about 5 seconds)
b) Press / hold the power and volume-up buttons together for about 6 or 7 seconds.

If successful, you should see a trainload of partitions start to load in your file manager (Nautilus).

6. Identify the destination path to the EMMC

If you’ve done my other blog where you install Ubuntu Touch to the SD card, this time we’ll do the same thing but point the destination to the eMMC memory.

  • Hit the super key to the left of the space bar
  • Type ‘disks’ to open the ‘Disks’ utility. You should now see two items related to UT in the left pane of the ‘Disks’ app:

‘Drive | JumpDrive microSD’

and

‘Drive | JumpDrive eMMC’

  • Click the ‘Drive | JumpDrive eMMC’ entry in the left pane
  • Take the path from the ‘Device’ field. In my case it looks like this: “Device /dev/sdc” but yours can – and very possibly will be – slightly different in the last letter. You can actually just highlight it and copy it from there

path: /dev/sdc

This is a great way to visually identify and confirm which device is your eMMC, which will help you quickly deal with unmounting stuff in the next section.
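If you prefer the terminal over the Disks GUI (totally optional, my own alternative), lsblk paints the same picture:

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT

The eMMC should stand out by its size and model string. Just be extra sure you’ve identified the right device letter before you flash anything.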

7. Unmount the partitions on the eMMC drive

If the drive is ‘fully ejected’ by Ubuntu, you will probably get this error when you run the final flashing command:

bmaptool: ERROR: cannot open destination file ‘/dev/sdc’: [Errno 123] No medium found: ‘/dev/sdc’

A reboot of the device and putting it into ‘Yumi Mode’ again might fix it for you if you are lucky – this worked once for me, but only once. I do not believe it can be done reliably, so you should proceed with the command line method. It’s really not that hard if you work together with the Disks utility.

The reason the ‘eject’ button in the file manager (Nautilus) doesn’t work for this purpose is that it uses udisksctl, which both unmounts and ejects the partition. We only want to unmount, not eject, you see. So, we must ‘terminally unmount’ these as follows:

First, run the mount command, which will show you (truthfully) which drives are (actually) mounted on the system. For me these were all located at the very bottom of the terminal output list.

You can, however, in the Disks utility, click through each partition and see at the bottom ‘mounted’, ‘not mounted’ or ‘unknown’, and compare against the terminal output to gain further confidence that you are unmounting and flashing to the right place. The Pinephone image has a mixture of all of these, it seems, so it’s probably better to just manually unmount everything with the terminal. It’s really easy; once you identify the actually-mounted partitions you just run:

sudo umount /dev/sdc2
sudo umount /dev/sdc4
etc, etc
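If there’s a real trainload of partitions, here’s a hedged little loop that tries to unmount every partition on the device in one go (assuming your eMMC is /dev/sdc like mine; triple-check that in Disks first!):

# try to unmount each partition on /dev/sdc; already-unmounted ones are silently skipped
for p in /dev/sdc[0-9]*; do
  sudo umount "$p" 2>/dev/null && echo "unmounted $p"
done

Either way, run mount again afterwards to confirm nothing on the device is still mounted.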

Tip: If while running the final flashing command you get this error:

bmaptool: ERROR: the image file ‘/home/wt/Downloads/ubuntu-touch-pinephone.img.xz’ has size 826.3 MiB and it will not fit the block device ‘/dev/sdc3’ which has 1.0 KiB capacity

You are probably writing to a partition like I did instead of the actual device. You probably put in /dev/sdc3 instead of /dev/sdc as an example.

8. Run the flashing command

This assumes you have already set up bmaptool and have read and understood my blog above.

Format:

sudo bmaptool copy --bmap ~/path/where/your/bmap/file/is/located /path/where/your/image/is/located /path/to/memory/device

My actual example:

sudo bmaptool copy --bmap ~/Downloads/pphone.bmap ~/Downloads/ubuntu-touch-pinephone.img.xz /dev/sdc

Worked for me! Hope it works for you too 🙂