Sharetribe is a great, Almost Open Source project which enables one to put up a C2C marketplace in record time. Well, that record time is indubitable when compared to starting from scratch, but that doesn’t mean it’s effortless.
We’re going to set up a Sharetribe instance on an Ubuntu VPS. Why Ubuntu? It’s easy and the most popular OS for VPSs. Why a VPS? Because the recommended Heroku/AWS deployment is going to run nearer to $50/mo. We’re going to cut that by a factor of 12.
Most recent Ubuntu versions should work with this guide, though I haven’t tested them all. We all stand on the shoulders of giants, in this case Karibou’s great guide for Ubuntu 16.04. Not much has changed from his guide, just updates for the most recent versions and fixes for some issues he didn’t get to before he, understandably, noped out. In other words, this guide is likely to work all the way down to Ubuntu 16 and possibly 14.
One little disclaimer: this guide is biased, opinionated, and I’m no expert. I did not develop Sharetribe and make some assumptions about it that may be wrong. This is simply what worked for me and my needs as a penny-pincher and, now, marketplace owner. There was a lot of pain involved which I would like to save you from, beloved reader.
Also, tutorials are boring. I’m going to write like a human to make it a little more fun, offering a little rope in case you find yourself in dependency hell.
ACHTUNG!
Should you use this guide?
Sharetribe’s SaaS has some advantages: installation, automatic upgrades, maintenance, security, and support. It’s quick and easy, just for a little money upfront as opposed to hosting the Almost Open Source yourself. They’ll even pay for your Google Maps API.
If you’ve never SSH’d into a server before, you’re looking in the wrong place. Check out Sharetribe.com for their SaaS options.
This guide is also not utterly complete. You’ll need to fill in many gaps, like getting your third-party APIs configured. You will almost certainly have to read errors and debug when issues crop up that I haven’t covered or when new issues emerge with new versions. If that gives you pause, weigh it now before proceeding.
The VPS
Do yourself a favor and grab a cheap KVM box from SSD Nodes. You might not need KVM, but I feel uncomfortable being unable to update kernels. We’re not using Docker or new-fangled portable software, so you’d likely be able to get away with OpenVZ or another, less flexible virtualization that will cost even less.
At the time of writing, I grabbed a $79/year box with 24G RAM and 400G disk. All things said and done, this handles about 4800 concurrent connections in production with a relatively light database. Pretty good. Development Sharetribe destroys memory in no time, craving around 4G, but a production server has a footprint closer to 800M. Memcached, Redis, or MySQL can be offloaded to make this memory footprint even smaller. You don’t need a box this big, but SSD Nodes makes it so bloody cheap that you might as well.
First steps on the Server
I hope you’ve set up a VPS before because I’m not getting detailed here. We’ll cover the basics.
adduser sharetribeisprettygood
adduser sharetribeisprettygood sudo
su sharetribeisprettygood
cd ~
sudo apt update
sudo apt install software-properties-common nano vsftpd curl git build-essential libssl-dev imagemagick libxslt-dev libxml2-dev libmysqlclient-dev mailutils openssl sasl2-bin -y
sudo apt update
sudo apt dist-upgrade
git clone https://github.com/sharetribe/sharetribe.git
cd sharetribe
git checkout latest
Look at that, you’ve cloned the repo. I took the liberty of adding most dependencies up here, many of which will only be needed much later.
Of course, the most important stuff has to be a little harder. We’re going to need version managers for Node and Ruby.
Node, NPM, NVM
Check out NVM for the latest install info; this changes often.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
source ~/.profile
The version of Node supported by Sharetribe changes. Check the official installation guide for reference. It’s really not worth rocking the boat on this one; the errors you’d get from using an unsupported version will be cryptic. I’m just using the versions at the time of writing, definitely not keeping this up to date.
nvm install 10.15
nvm use 10.15
Run these to verify everything’s in place. The NPM version will sync to Node, don’t mess with it. Please don’t run an npm update, you hapless fool.
which node
which npm
node -v #should equal 10.15.3
npm -v #should equal 6.4.1
Special note for WSL. While using this guide to install on WSL to do some heavy modifications to the core of Sharetribe, I got a segmentation fault with npm / nvm. Couldn’t even check the version without a seg fault.
If this describes you, remove nvm with rm -rf ~/.nvm. Uninstall node/npm with the following:
sudo apt --auto-remove purge nodejs
sudo apt --auto-remove purge npm
Now, we’re gonna use n. It used to be my favorite, given that tight clean syntax, so we probably should’ve started there.
sudo apt install npm
sudo npm install -g n
sudo n 10.15.3
sudo n
# select the only version
#check node and npm
node -v
npm -v
# did npm segfault again? No problem
sudo apt --auto-remove purge nodejs
sudo apt --auto-remove purge npm
#check again
node -v
npm -v
# maybe you should run n again? Who knows
Ruby, RVM
Ruby’s our boy for this whole operation. For that reason alone, we must make sure it’s the version recommended by Sharetribe for your current install.
sudo apt-add-repository -y ppa:rael-gc/rvm
sudo apt-get update
sudo apt-get install rvm
echo 'source "/etc/profile.d/rvm.sh"' >> ~/.bashrc
sudo usermod -a -G rvm $USER
sudo reboot
Oops, killed your window. Log in as your new user, which might be `sharetribeisprettygood`, and go to the Sharetribe directory with cd ~/sharetribe. This will be tacitly assumed for all future reboots and any other time you might wander back to mess with Sharetribe.
rvm install 2.6.5
The above command can fail if you have any errors with sudo apt-get update. It will also fail if you didn’t reboot.
rvm rubygems current
rvm use 2.6.5
MySQL
Looking for Mongo? I’m a little disappointed too.
Version won’t matter much here. As always, you can check back to the official docs and fret over it but it’s very unlikely to matter unless you decide to use an ancient version. We’ll go with Ubuntu’s current in the default repository.
sudo apt-get install mysql-server -y
It’s best to go with a big, scary root password, as long as you keep track of it.
sudo mysql_secure_installation
All the defaults are fine here, just keep in mind you’ll be re-entering your root password. Really don’t want that to be empty.
We’re going to log in and make two databases and a user for them. The same could be applied to making staging or test servers, but I’m not going to do that. You can also share one database between all of these different stages of development, which may have repercussions beyond what I’m aware of, though that is my personal configuration.
Details on all of this to follow, for now we’re just going to make our two databases. Please change PASSWORD below.
mysql -u root -p
# login
CREATE DATABASE sharetribe_development CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE sharetribe_production CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER sharetribe@localhost;
SET PASSWORD FOR sharetribe@localhost= '____replace_with_your_password____';
-- OR, for MySQL 8.0+:
ALTER USER 'sharetribe'@'localhost' IDENTIFIED BY 'MY_PASSWORD_HERE';
GRANT ALL ON sharetribe_development.* TO 'sharetribe'@'localhost';
GRANT ALL ON sharetribe_production.* TO 'sharetribe'@'localhost';
EXIT
Done with SQL. We’ll prime these databases with the Sharetribe structure later. While the login credentials are still fresh in your mind, let’s set up the Sharetribe configuration so it can connect to the databases:
cp config/database.example.yml config/database.yml
nano config/database.yml
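For reference, here’s roughly what mine looks like once the user and databases from above are plugged in. The exact keys in database.example.yml can differ between Sharetribe versions, so keep whatever extras the example file ships with and just fill in the credentials:
development:
  adapter: mysql2
  database: sharetribe_development
  encoding: utf8
  username: sharetribe
  password: ____replace_with_your_password____
  host: localhost
production:
  adapter: mysql2
  database: sharetribe_production
  encoding: utf8
  username: sharetribe
  password: ____replace_with_your_password____
  host: localhost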
Sphinx
Sphinx is how we’re going to be searching the Sharetribe database for things like listings. It runs its own process, soon to be a service, and it’s necessary for your marketplace, though you may not get obvious errors if it’s not running.
We’re not going to fret about versions here, but you could if you run into issues.
sudo apt-get install sphinxsearch
Installing Sharetribe
This is it! The big build.
gem install bundler
bundle install
npm install
Each of these steps will take aeons, potentially outliving you. Expect that.
Have an error? Check the dependencies. Try to find the most appropriate line of the stack trace and search the community forums.
A recent install got an error because of a yanked dependency:
Your bundle is locked to mimemagic (0.3.5), but that version could not be found in any of the sources listed in your
Gemfile. If you haven't changed sources, that means the author of mimemagic (0.3.5) has removed it. You'll need to
update your bundle to a version other than mimemagic (0.3.5) that hasn't been removed in order to install.
If this happens for you, you need to modify the Gemfile to include a Git hash for this yanked version:
gem 'mimemagic', github: 'mimemagicrb/mimemagic', ref: '01f92d86d15d85cfd0f20dabd025dcbd36a8a60f'
I put this above Rails since this is a Rails dependency.
I noted that I got an error with mkdir while installing gems. The error was something like mkdir: command not found. Preposterous, right? Pretty basic command.
It was because Ruby was looking in /usr/bin/mkdir instead of /bin/mkdir. If you have the same problem, let’s symlink it:
sudo ln -s /bin/mkdir /usr/bin/mkdir
A recent build of mine had some cryptic errors with npm install. The only one I saw clearly was about hash integrity. A second run of npm install ran without any errors, so… that’s fine, right?
Let’s get to work on initializing the configuration files, soon to be your well-known and perhaps disliked acquaintance.
The “main” configuration file is actually just an opportunity for overrides. We need to copy this file into place:
cp config/config.example.yml config/config.yml
We don’t need to touch it now. Let’s prep some database structures:
bundle exec rake db:create db:structure:load
RAILS_ENV=production bundle exec rake db:create
RAILS_ENV=production bundle exec rake db:structure:load
N.B., The default RAILS_ENV is development.
We’ve just structured our two databases. I could be wrong, but I believe they have the same structure, so keep in mind you can dump and import one into the other if that’s useful to you.
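If you ever do want to copy one into the other, a plain dump and re-import does the trick. Nothing Sharetribe-specific here, just stock mysqldump, and mind that it overwrites whatever is in the target database:
mysqldump -u sharetribe -p sharetribe_development > /tmp/sharetribe_dev.sql
mysql -u sharetribe -p sharetribe_production < /tmp/sharetribe_dev.sql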
ERROR: bundle exec rake db:create db:structure:load
$ bundle exec rake db:create db:structure:load
rake aborted!
Aws::Sigv4::Errors::MissingCredentialsError: Cannot load `Rails.config.active_storage.service`: missing credentials, provide credentials with one of the following options:
- :access_key_id and :secret_access_key
- :credentials
- :credentials_provider
Look familiar? I had this issue when I made my first instance and was informed that the default option was changed. (See Comments for a link.) This may not yet be the case.
The problem is because Sharetribe is looking for AWS and failing to find credentials. It should be using the local hard drive, especially for that critical first run. So, we need to set this option in the config files, just not our normal config file.
sudo nano config/environments/development.rb
...
# config.active_storage.service = :amazon
...
If you use AWS later, you just remove the comment. The comment causes it to default to :local, the option you’d like to use initially. Keep in mind, there are two instances of this :amazon option, for whatever reason, so just make sure config.active_storage.service’s final value will be :local. Naturally, you can also just change :amazon to :local; just remember that you changed it.
I’m putting this here since I’m asking you to run the DB:create command and this can break it. It’s covered in more detail under “Image Hosting”.
Running Sharetribe
Sharetribe runs by using three distinct processes: an HTTP/Ruby server, a Sphinx search process, and a Delayed Jobs Worker.
Running all three concurrently will give you a functioning marketplace, no more no less, though the HTTP server is the heart and soul: its looks, its functions, everything you’d think of. The Delayed Jobs Worker handles deferred queries and emails. The Sphinx process, as aforementioned, is primarily for listing search.
We can do this using the commands provided by the official Sharetribe documentation. This is good for a first boot-up and can provide useful logging right into your terminal. The problem is that it takes three terminal windows.
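If you want to try that route first, the commands boil down to roughly the same things our services will run below. Run each in its own terminal from ~/sharetribe; treat this as a sketch and defer to the official docs if they differ:
# terminal 1: HTTP server (development flavor)
foreman start -f Procfile.static -p 8080
# terminal 2: Sphinx search (index, then start the daemon)
bundle exec rake ts:index
bundle exec rake ts:start
# terminal 3: Delayed Jobs worker
bundle exec rake jobs:work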
We’re going to go right into making these processes controllable via SystemD. These processes will be stoppable, startable, and will log directly into journalctl.
SystemD
Three processes, three files. We’ll add a bonus file for a different HTTP server on the same port as production for development/debugging.
Let’s make our production server service. In /etc/systemd/system/sharetribe_http.service, via sudo nano /etc/systemd/system/sharetribe_http.service:
[Service]
WorkingDirectory=/home/sharetribeisprettygood/sharetribe
ExecStartPre=/bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec rake assets:precompile'
ExecStart=/bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec unicorn'
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sharetribeHttp
User=sharetribeisprettygood
Group=sharetribeisprettygood
Environment=RAILS_ENV=production NODE_ENV=production
[Install]
WantedBy=multi-user.target
Notice the many instances of my absurd username. You’ll need to replace those if you didn’t just roll with it.
This server is going to use Unicorn, an HTTP server we haven’t installed yet.
Here’s our development server, configured to run on the same port as Unicorn will (:8080).
sudo nano /etc/systemd/system/sharetribe_http_dev.service
[Service]
WorkingDirectory=/home/sharetribeisprettygood/sharetribe
ExecStartPre=/bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec rake assets:precompile'
ExecStart=/bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && foreman start -f Procfile.static -p 8080'
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sharetribeHttpDev
User=sharetribeisprettygood
Group=sharetribeisprettygood
Environment=RAILS_ENV=development NODE_ENV=development
[Install]
WantedBy=multi-user.target
sudo nano /etc/systemd/system/sharetribe_search.service
[Service]
Type=forking
User=sharetribeisprettygood
Group=sharetribeisprettygood
WorkingDirectory=/home/sharetribeisprettygood/sharetribe
After=mysqld.service
ExecStartPre=/bin/bash -lc 'source /home/sharetribeisprettygood/.profile && source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec rake ts:index'
ExecStart=/bin/bash -lc 'source /home/sharetribeisprettygood/.profile && source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec rake ts:start'
PassEnvironment=
Restart=on-failure
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sharetribeSearch
Environment=RAILS_ENV=production NODE_ENV=production
PIDFile=/home/sharetribeisprettygood/sharetribe/log/production.sphinx.pid
[Install]
WantedBy=multi-user.target
sudo nano /etc/systemd/system/sharetribe_work.service
[Service]
WorkingDirectory=/home/sharetribeisprettygood/sharetribe
ExecStart=/bin/bash -lc 'source /home/sharetribeisprettygood/.profile && source /home/sharetribeisprettygood/.rvm/scripts/rvm && bundle exec rake jobs:work'
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sharetribeWork
User=sharetribeisprettygood
Group=sharetribeisprettygood
Environment=RAILS_ENV=production NODE_ENV=production
[Install]
WantedBy=multi-user.target
sudo systemctl disable sharetribe_http
sudo systemctl enable sharetribe_http_dev
sudo systemctl enable sharetribe_search
sudo systemctl enable sharetribe_work
sudo systemctl daemon-reload
Keep in mind that we’re either running the development HTTP server or the production HTTP server at one time. We can use the following commands to start, stop, and restart these services at will, which will become especially important if you want to switch from production to development or vice-versa:
sudo systemctl start [service name here]
sudo systemctl stop [service name here]
sudo systemctl restart [service name here]
Naturally, all of these services boot at start-up, except for sharetribe_http. When you’re ready to switch to production, we just disable sharetribe_http_dev and enable sharetribe_http.
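Concretely, the swap looks like this (stop/start takes effect immediately; enable/disable only changes what happens at boot):
sudo systemctl stop sharetribe_http_dev
sudo systemctl disable sharetribe_http_dev
sudo systemctl enable sharetribe_http
sudo systemctl start sharetribe_http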
So, we’ve automated these processes, but what about debugging their logs? Development logs much more than Production, so you can make a quick switch over and replicate the error to get it fresh in the log. (It may be advisable to make config/database.yml have development connect to the production database if you’re debugging after moving to production.)
SystemD logs in realtime to syslog, but is more easily accessed using sudo journalctl -xe.
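To follow a single service instead of the whole system log, journalctl can also filter by unit or by the SyslogIdentifier values we set in the service files:
sudo journalctl -u sharetribe_work -f # follow the worker live
sudo journalctl -t sharetribeSearch -n 200 # last 200 lines from Sphinx, by its syslog identifier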
HAProxy, Let’s Encrypt, Adminer
Great, so I’ve been through all of this without seeing my site and now this chump has me setting it to port 8080. I haven’t even seen it yet.
~You
We’re going to start right, trust me. Auto-renewing SSL right from the start, with SSL termination via HAProxy. HAProxy is going to conveniently link to our HTTP server on port 8080 as well as permit us to connect to Apache/PHP to easily view and edit the database via Adminer. That may or may not sound complex, but it’s not that bad. It’s my preferred set-up for a reason and my IQ is chilly on a good day.
sudo apt-get install certbot
I believe it’ll ask for an email. Aside from that, the configuration will be the default.
sudo certbot certonly --standalone
sudo rm /etc/letsencrypt/live/README
Type in your domain name, without the www. Ditching the www is trendy right now (it is rather superfluous), though we’ll get one for the www next.
sudo apt-get install haproxy
sudo mkdir -p /etc/haproxy/certs
DOMAIN='YOURDOMAINNAMEHERE.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'
sudo chmod -R go-rwx /etc/haproxy/certs
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
sudo nano /etc/haproxy/haproxy.cfg
# Leave almost everything as it is. Our interests begin with the "frontend" part of the file. Change the "frontend" and "backend" portions to reflect these lines
frontend www-http
bind YOUR.IP.ADDRESS.HERE:80
reqadd X-Forwarded-Proto:\ http
default_backend www-backend
frontend www-https
bind YOUR.IP.ADDRESS.HERE:443 ssl crt /etc/haproxy/certs/
reqadd X-Forwarded-Proto:\ https
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
acl is_sharetribe hdr_end(host) -i YOURDOMAINNAMEHERE.com
use_backend sharetribe-backend if is_sharetribe
default_backend www-backend
backend www-backend
redirect scheme https if !{ ssl_fc }
server www-1 127.0.0.1:8181 check
backend www-http-backend
server www-1 127.0.0.1:8181 check
backend sharetribe-backend
server www-1 127.0.0.1:8080 check
backend letsencrypt-backend
server letsencrypt 127.0.0.1:54321
We’re doing a few things here, so let’s review.
We’re redirecting all HTTP requests to HTTPS. We’re giving Let’s Encrypt its own backend on port 54321. Finally, we’re routing all requests for our marketplace’s domain name to the HTTP server we set up on port 8080.
Because HAProxy will terminate the SSL process, our marketplace will now be compatible with HTTPS connections. Yet, because we’re matching all subdomains of our domain name, we won’t be able to access Apache (and thus PHP) for that domain name.
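Before moving on, have HAProxy validate the file and then reload it. The -c flag only checks the configuration without touching the running service:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo service haproxy restart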
Let’s change Apache’s port and set it up with PHP for Adminer. While we’re at it, let’s make Adminer available from a browser.
sudo nano /etc/apache2/ports.conf
Listen 8181
You can leave or remove the SSL bits about listening on 443. This will clash with HAProxy if you enable SSL for Apache, but you shouldn’t do that anyway.
Now, Adminer. Adminer is the best: quick, small, easy. (PHPMyAdmin is rubbish.) Naturally, we’ll need PHP as well so we’ll hammer that out quickly, albeit with probably more extensions than are strictly required.
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.3 php7.3-mysql php7.3-curl php7.3-gd php7.3-intl php-pear php7.3-imagick php7.3-imap php7.3-memcache php7.3-ps php7.3-pspell php7.3-recode php7.3-snmp php7.3-sqlite php7.3-tidy php7.3-xmlrpc php7.3-xsl libapache2-mod-php7.3
sudo mkdir /usr/share/adminer
sudo wget "http://www.adminer.org/latest.php" -O /usr/share/adminer/latest.php
sudo ln -s /usr/share/adminer/latest.php /usr/share/adminer/adminer.php
echo "Alias /adminer.php /usr/share/adminer/adminer.php" | sudo tee /etc/apache2/conf-available/adminer.conf
sudo a2enconf adminer.conf
sudo service apache2 restart
This installed Adminer and made it available from Apache at any domain that points to your server, in the subfolder /adminer.php. Since HAProxy won’t let YOURDOMAINNAME go to Apache, we’ll use your raw IP address instead. This requires bypassing a warning on your browser about the SSL certificate (you can’t get one for an IP), but I don’t care. If you think I’m being an idiot, you could send any other domain to the server and use it that way, or make an exception in HAProxy.
To access Adminer (which we don’t need to do now), you can see it here:
https://YOUR.IP.ADDRESS.HERE/adminer.php
To get that new cert for our www or any other domains (e.g., www.YOURDOMAIN.com), we’ll use these commands:
sudo certbot certonly --standalone --preferred-challenges http --http-01-port 54321 -d www.YOURDOMAINNAMEHERE.com
DOMAIN='www.YOURDOMAINNAMEHERE.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'
sudo service haproxy restart
Hmm, you may wonder, didn’t he promise these certificates would auto-renew? Never fear, intrepid friend, we’re going to make a quick shell script for that and throw it on Cron:
sudo nano /usr/local/sbin/le-renew-haproxy
#!/bin/bash
certbot renew --standalone --preferred-challenges http --http-01-port 54321
for i in $( ls /etc/letsencrypt/live ); do
    web_service='haproxy'
    domain=$i
    http_01_port='54321'
    combined_file="/etc/haproxy/certs/${domain}.pem"
    exp_limit=30;
    cert_file="/etc/letsencrypt/live/$domain/fullchain.pem"
    key_file="/etc/letsencrypt/live/$domain/privkey.pem"
    if [ ! -f $cert_file ]; then
        echo "[ERROR] certificate file not found for domain $domain."
    fi
    exp=$(date -d "`openssl x509 -in $cert_file -text -noout|grep "Not After"|cut -c 25-`" +%s)
    datenow=$(date -d "now" +%s)
    days_exp=$(echo \( $exp - $datenow \) / 86400 |bc)
    echo ""
    echo "############################################################"
    echo "######## $domain : ($days_exp days left) ########"
    echo "############################################################"
    echo ""
    echo "Creating $combined_file with latest certs..."
    sudo bash -c "cat /etc/letsencrypt/live/$domain/fullchain.pem /etc/letsencrypt/live/$domain/privkey.pem > $combined_file"
    echo "Renewal process finished for domain $domain"
done
echo "Reloading $web_service"
/usr/sbin/service $web_service reload
sudo chmod 700 /usr/local/sbin/le-renew-haproxy
sudo crontab -e
0 1 * * * /usr/local/sbin/le-renew-haproxy >> /var/log/letsencrypt-renewal.log
For the Cron-challenged, such as myself, this is running our renewal check every night at 1AM. (You might want an appropriate timezone for your server to make sure this is actually 1AM, i.e., during off-hours.) Wake up to an HTTPS error? Check /var/log/letsencrypt-renewal.log for answers. Actually, consider checking it tomorrow to make sure it ran well.
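You can also run the script by hand once right now rather than waiting for Cron, with the output landing in your terminal instead of the log:
sudo /usr/local/sbin/le-renew-haproxy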
See Your Site
One last thing: some basic configuration. We’re going to dive into the “real” configuration file, config.defaults.yml. It’s big, so you may want to look at it over SFTP, but it’s also clean and readable, so you can get away with nano.
cp config/config.defaults.yml config/config.defaults.yml.original
nano config/config.defaults.yml
# Big file here. Nano's find feature (Ctrl+W) is your friend.
domain: YOURDOMAINNAMEHERE.com
community_not_found_redirect: https://YOURDOMAINNAMEHERE.com
sharetribe_mail_from_address: "noreply@YOURDOMAINNAMEHERE.com"
feedback_mailer_recipients: "YOUREMAILHERE@EMAIL.COM"
show_landing_page_admin: true
app_encryption_key: ABUNCHOFRANDOMSTUFFTHATISTOTALLYARBITRARYBUTSHOULDNTBELOST
Let’s reload the config by restarting the HTTP server:
sudo systemctl restart sharetribe_http_dev
At long last, you can finally see your site at your domain name.
If you can’t, make sure the HTTP server, Worker, and Search processes are running. You’ll also want HAProxy listening on :443 and :80. You can see them on their ports using the following:
sudo netstat -plantu
If you’re looking just for the services we wrote, you can see them using the following command. (You can search for things like the process name or enabled / disabled.)
systemctl list-unit-files | grep WHATEVER_YOURE_LOOKING_FOR
Feel free to fill out the basics for the marketplace and/or work through the “Getting Started” guide. We’re almost done here, anyway. The biggest to-dos are setting up Sendmail, making image hosting work, doing some Stripe stuff, and setting up the Rake secret for production. Pretty lame, really. You won’t care about these until something’s broken anyway.
Image Hosting
ShareTribe is made for hosting on Heroku and AWS, i.e., overpriced hosting services. We, the privileged VPS elite, glean both benefit and detriment from this arrangement. On the negative side, ShareTribe does not default to hosting images on the server it’s installed on. We can turn this to our advantage later on a heavily loaded production server, but for now it is an unabated evil.
Let’s change that.
nano config/environments/development.rb
# find ALL instances of the following:
config.active_storage.service = :amazon
# and change it to:
config.active_storage.service = :local
Do the same here if you want local image hosting on production:
nano config/environments/production.rb
Is that enough? Not quite. Email images will only use relative file paths (i.e., be broken) unless we set a couple of settings.
nano config/config.defaults.yml
...
# Big file here. Nano's find feature (Ctrl+W) is your friend.
...
asset_host: YOURDOMAINNAMEHERE.com
user_asset_host: https://YOURDOMAINNAMEHERE.com
Rake Secret
Shhh…. Your Rake Secret will define critical paths for your assets when the app is built. Don’t lose it! Even if you replace it, keep it somewhere safe just in case.
The Rake Secret is essential to build for production, so we’ll get it and place it in the overriding config file, but just for production. (There’s also a secret for development, but it’s defined in config.defaults.yml.)
cd ~/sharetribe
rake secret
nano config/config.yml
production:
secret_key_base: # add the generated key
Stripe
So, you want to become a millionaire? Stripe is the first step.
Unfortunately, it suffers from some well-known issues in the default build. First, the payments settings in the Admin section are hidden by default. Let’s fix that:
rails console
TransactionService::API::Api.processes.create(community_id: 1, process: :preauthorize, author_is_seller: true)
TransactionService::API::Api.settings.provision(community_id: 1, payment_gateway: :stripe, payment_process: :preauthorize, active: true)
The regex used to pre-validate Stripe keys after their input in the Admin section did not work for me. On the plus side, this regex is exposed as a setting:
nano config/config.defaults.yml
stripe_private_key_pattern: "sk_(test|live)_.{24}"
stripe_publishable_key_pattern: "pk_(test|live)_.{24}"
# We did this earlier, but double-check that this field is filled with a random string such as the one below
app_encryption_key: "2617463e32b65ec6645b3cb661f9686f2ab608c18ac930ee1295a1c1170504ed"
Restart the development server. Now, with the usual Stripe set-up, you can configure Stripe from the Admin area.
Great, you say, I’ll plug in my test keys now. Sure… but you won’t be able to change to the live keys later without a little help.
When it’s time to change, we’ll look at the database and find the table payment_settings. Delete whatever’s in the following fields: api_private_key, api_publishable_key, and api_visible_public_key. Set api_verified to 0, i.e., false. Reboot the http server. From the Admin area you’ll now be able to set your live keys or switch back to test from live.
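If you’d rather do the whole reset in one go from Adminer’s SQL tab, something along these lines should do it. The column names are just the ones mentioned above, so double-check them against your actual payment_settings table, and adjust or drop the WHERE clause if your gateway column looks different:
UPDATE payment_settings
SET api_private_key = NULL, api_publishable_key = NULL, api_visible_public_key = NULL, api_verified = 0
WHERE payment_gateway = 'stripe';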
Ah, terrific, you say. Now I’m all set to issue payouts!
Not quite, my ambitious friend. You’ll need to enable Stripe Connect. ShareTribe’s official docs handle this quite well since it’s also a necessity for their paid hosting. See that doc here.
Sendmail
Finally, we arrive at the deepest level of hell, where the torture is so cruel that even Satan blushes: Sendmail.
Sendmail is just like Postfix, except that it doesn’t work and is a complete pain to configure. ShareTribe will use SMTP and third-party email services, but “it will still use Sendmail for some services, so make sure it’s configured”. I don’t know when it falls back to Sendmail though, frankly, it’d be easier to change those services than get Sendmail fully compliant, so we’ll embark on this dark path together.
First, we’re going to want to send mails from the marketplace’s domain name. That means getting yourself an email solution. (The true penny-pinchers will get another $50/year VPS and set up Mailcow, easily handling every domain you could need.) Once we have that email service configured to allow remote connection via SMTP on port 587, it’s time to set up Sendmail to connect to it.
sudo apt-get install sendmail mailutils sendmail-bin
sudo mkdir -m 700 /etc/mail/authinfo/
cd /etc/mail/authinfo/
sudo nano auth-file
AuthInfo:YOUR.EMAIL.SERVER "U:root" "I:YOUR EMAIL ADDRESS" "P:YOUR PASSWORD"
sudo makemap hash /etc/mail/authinfo/auth-file < /etc/mail/authinfo/auth-file
sudo nano /etc/mail/sendmail.mc
# find "include(`/usr/share/sendmail/cf/m4/cf.m4')dnl" at the beginning, because this line MUST go right after it
include(`/etc/mail/tls/starttls.m4')dnl
...
# find "dnl # Default Mailer setup" because these lines go right after it
define(`SMART_HOST',`[YOUR.MAIL.SERVER]')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
define(`confAUTH_OPTIONS', `A p')dnl
TRUST_AUTH_MECH(`EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confDEF_AUTH_INFO', `/etc/mail/authinfo/auth-file')dnl
FEATURE(`authinfo',`hash -o /etc/mail/authinfo/auth-file.db')dnl
DAEMON_OPTIONS(`Port=587, Name=MSA, M=E')dnl
define(`MAIL_HUB', `YOUR.MAIL.SERVER.')dnl
define(`LOCAL_RELAY', `YOUR.MAIL.SERVER.')dnl
Boy, that’s weird syntax. Note that every quoted portion begins with a backtick and ends with a single quote. MAIL_HUB and LOCAL_RELAY have a dot after the FQDN. We added SASL support at the beginning and need to do the same in another file:
sudo nano /etc/mail/submit.mc
# find "include(`/usr/share/sendmail/cf/m4/cf.m4')dnl" becuase this line MUST go exactly one line after it
include(`/etc/mail/tls/starttls.m4')dnl
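Sendmail won’t read the .mc files directly; they get compiled into the .cf files it actually uses. On Ubuntu, sendmailconfig regenerates everything after you edit them (answer yes to its prompts), and a restart afterwards doesn’t hurt:
sudo sendmailconfig
sudo service sendmail restart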
sudo adduser noreply
sudo nano /home/noreply/email.txt
Subject: test
test test
test
.
Now, we’ll test from the command line.
sendmail -O LogLevel=99 -v -f "noreply@YOURDOMAINNAME.com" YOUR@WORKINGEMAILADDRESS.com < /home/noreply/email.txt
Did it send? I hope so. Try testing local users, like yourname@yourdomainname.com, assuming such is set up. This configuration should pass that test, but Sendmail is very picky about trying to find local users.
Last things last: add Sendmail to your config and reboot your http server before testing out ShareTribe’s emails.
nano config/config.defaults.yml
...
mail_delivery_method: "sendmail"
...
The easiest emails to test are the invitations and feedback (the latter appearing on the default “Contact us” page). Make sure these are getting through and looking pretty (see Image Hosting above).
Most emails from ShareTribe will be from transactions, so set up those test keys and start firing away. The emails are well-written, but some have issues with the English locale (the only one I tested). If you need to change these, it’s done by modifying the config/locales/en.yml file.
Unicorn
Unicorn is an HTTP server for Ruby that’s pretty good. I don’t know if it’s the best, but benchmarks definitely indicate it’s pretty good. The development server seems to use Puma (by its process name) or maybe Passenger (after looking at the code). Frankly, I have no idea: all I know is that my production tests using the default HTTP server handled about 50 concurrent users and, with Unicorn, handles at least 600.
gem install unicorn
nano config/unicorn.rb
# set path to application
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir
# Set unicorn options
# worker_processes should be set equal to the number of cores your machine has, visible with 'lscpu'
worker_processes 3
preload_app true
timeout 30
# Set up socket location
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64
# Logging
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
# Set master PID location
pid "#{shared_dir}/pids/unicorn.pid"
Pretty nice. That’s really it for Unicorn, especially since we set up the service already.
Maybe you’re curious about performance? Check out the memory footprint between the two servers by stopping/starting the appropriate services and running htop:
sudo apt install htop
sudo htop
Cronjobs / Maintenance
In production, we need to wipe a few things from the server from time to time. We’re going to set up some Cronjobs for this.
sudo crontab -e
5 1 * * * /bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && cd /home/sharetribeisprettygood/sharetribe && RAILS_ENV=production bundle exec rails runner ActiveSessionsHelper.cleanup'
15 1 * * * /bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && cd /home/sharetribeisprettygood/sharetribe && RAILS_ENV=production bundle exec rails runner CommunityMailer.deliver_community_updates'
25 1 * * * /bin/bash -lc 'source /home/sharetribeisprettygood/.rvm/scripts/rvm && cd /home/sharetribeisprettygood/sharetribe && RAILS_ENV=production bundle exec rake sharetribe:delete_expired_auth_tokens'
These are spaced out by 10 minutes at 1AM or so, though these processes probably won’t take that long on all but the most burdened servers. The Cron user is naive to our profile, so we source RVM, cd into the app directory, and pin RAILS_ENV to production before running the bundle command.
Note that if you’re using PayPal or AWS’s SES there are two additional jobs to add, according to the docs. These will use the same format as above, but aren’t relevant to our set-up here.
Icons
Some FontAwesome icons are dated. This is because Sharetribe Go is hosted as a paid service by its developers, who use a proprietary iconset called SS-Pika and SS-Social. While FontAwesome (free) is enabled by default, only some of the modern icons from FontAwesome have been mapped onto the original SS-Pika classes. To be clear, the older version of FontAwesome included in Sharetribe should be working normally.
We have two options here. You can install SS-Pika ($89) or update FontAwesome and map some of Font-Awesome on to SS-Pika classes. I did the former, so that will receive a little more coverage here.
Find the following files:
ss-pika.eot
ss-pika.woff
ss-pika.svg
ss-pika.ttf
ss-social-regular.woff
ss-social-regular.eot
ss-social-regular.svg
ss-social-regular.ttf
All of these files need to find their way to app/assets/stylesheets/. From there, we need to change a configuration option:
nano config/config.defaults.yml
...
icon-pack: "ss-pika"
...
This should be enough, with the path defaults being sufficient. That said, I still had trouble, corrected by hardcoding the path:
nano app/assets/stylesheets/fonts.scss.erb
@charset "UTF-8";
<%= (APP_CONFIG.icon_pack == "font-awesome" ? "@import 'font-awesome.min';" : "@import 'ss-social';\n@import 'ss-pika';").html_safe %>
I think this is our first official deviation from source. Not great, diffs are definitely in your future. Nonetheless, this change will still preserve your ability to switch back to FontAwesome by using the config option.
If you’d like to map more of FontAwesome on to SS-Pika, the mapping file is app/view_utils/icon_map.rb. The syntax is "fontAwesomeName" => "ss-iconName" (with a trailing comma). In the current version of FontAwesome, "clock" has changed to "clock-o". This can be remedied in the mapping file by changing "clock" in the left column to its updated name.
Remote Image Hosting via AWS
Remember my teensy 40G hard disk? It’s time to upgrade.
I was going to make a localstack instance to emulate AWS and save money. I ended up in over my head, and when I looked at AWS’s pricing, it really wasn’t that bad. Localstack remains an interesting idea for some enterprising sysadmin.
This section is incomplete. I failed to write down my steps, instead banging my head against a wall for a day or two. I do recall some big errors I had, so I’ll record those here.
First, change the development and production config files to :amazon. (See Image Hosting for the specifics.)
Now, I’m going to get pretty vague. Make an AWS account. Make two AWS buckets, in compliance with the instructions from Sharetribe. Add the CORS policies. Add the keys to the main config file. Something about an IAM user. (Biblical?)
Sorry, it pains me to be so ambiguous. I’ve really let you down here, but you can do it. Stiff upper lip and all.
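If the CORS step is what trips you up, the JSON that S3 expects these days looks roughly like the below. Treat it purely as a shape reminder with my assumptions filled in (your domain as the allowed origin, basic upload methods); the Sharetribe instructions have the exact policy they want:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "POST", "PUT"],
        "AllowedOrigins": ["https://YOURDOMAINNAMEHERE.com"],
        "ExposeHeaders": []
    }
]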
I had two big problems: bad pathnames to images and failure to use Signature Version 4 signing.
The pathnames, at least for my server area, were broken.
I used region us-east-2, and the path-building in config/application.rb was broken. It built the path with a hyphen instead of a dot, resulting in a lot of confusion on my end. (It was subtle enough that I screwed around with bucket policies for hours before I found it.) Here’s a contrast:
Img Src on ST: https://atlastalked-permanent-images.s3-us-east-2.amazonaws.com/images/communities/wide_logos/1/header/atlastalked-widelogo.png?1596989181
Img Path on S3: https://atlastalked-permanent-images.s3.us-east-2.amazonaws.com/images/communities/wide_logos/1/original/atlastalked-widelogo.png
Changing application.rb to reflect the path from S3 fixed this. Note, there’s another inconsistency between these paths, but… I forgot how I fixed this. Sorry.
Next, I suffered endless torment for the hubris of attempting to use S3. I spent a day on some forgotten error before I discovered that apparently all regions but US East 1 use a new kind of encryption that Sharetribe doesn’t yet support. See here.
I had and have no idea what this means, but, through his endless benevolence, our savior Eugene Key has provided a flawless copy and paste solution. I’ll both link it and copy it here, just in case.
app/services/s3_uploader.rb
class S3Uploader
  def initialize()
    @aws_access_key_id = APP_CONFIG.aws_access_key_id
    @aws_secret_access_key = APP_CONFIG.aws_secret_access_key
    @bucket = APP_CONFIG.s3_upload_bucket_name
    @acl = "public-read"
    @expiration = 10.hours.from_now
    @s3_region = APP_CONFIG.s3_region
    @current_dt = DateTime.now
    @policy_date = @current_dt.utc.strftime("%Y%m%d")
    @x_amz_date = @current_dt.utc.strftime('%Y%m%dT%H%M%SZ')
    @x_amz_algorithm = "AWS4-HMAC-SHA256"
    @x_amz_credential = "#{@aws_access_key_id}/#{@policy_date}/#{@s3_region}/s3/aws4_request"
  end

  def fields
    {
      :key => key,
      :acl => @acl,
      :success_action_status => 200,
      'X-Amz-Credential': @x_amz_credential,
      'X-Amz-Algorithm': @x_amz_algorithm,
      'X-Amz-Date': @x_amz_date,
      'X-Amz-Signature': signature,
      'Policy': policy
    }
  end

  def url
    "https://#{@bucket}.s3.amazonaws.com/"
  end

  private

  def url_friendly_time
    Time.now.utc.strftime("%Y%m%dT%H%MZ")
  end

  def year
    Time.now.year
  end

  def month
    Time.now.month
  end

  def key
    "uploads/listing-images/#{year}/#{month}/#{url_friendly_time}-#{SecureRandom.hex}/${index}/${filename}"
  end

  def policy
    Base64.encode64(policy_data.to_json).gsub("\n", "")
  end

  def policy_data
    {
      expiration: @expiration.utc.iso8601,
      conditions: [
        ["starts-with", "$key", "uploads/listing-images/"],
        ["starts-with", "$Content-Type", "image/"],
        ["starts-with", "$success_action_status", "200"],
        ["content-length-range", 0, APP_CONFIG.max_image_filesize],
        {"x-amz-algorithm" => @x_amz_algorithm},
        {"x-amz-credential" => @x_amz_credential},
        {"x-amz-date" => @x_amz_date},
        {bucket: @bucket},
        {acl: @acl}
      ]
    }
  end

  def get_signature_key(key, date_stamp, region_name, service_name)
    k_date = OpenSSL::HMAC.digest('sha256', "AWS4" + key, date_stamp)
    k_region = OpenSSL::HMAC.digest('sha256', k_date, region_name)
    k_service = OpenSSL::HMAC.digest('sha256', k_region, service_name)
    k_signing = OpenSSL::HMAC.digest('sha256', k_service, "aws4_request")
    k_signing
  end

  def signature
    signature_key = get_signature_key(@aws_secret_access_key, @policy_date, @s3_region, "s3")
    OpenSSL::HMAC.hexdigest('sha256', signature_key, policy)
  end
end
One last thing that might help those unfamiliar with S3/AWS: I ran through a dozen bucket policies before I found the right one. It warned that my whole lineage would be cursed if I continued, that the bucket would become… PUBLIC!
Of course, this is what you want. The bucket has to be public to show images to anyone who wants to see them. Here was my final bucket policy (the same for all buckets):
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOURBUCKETNAME-permanent-images/*"
        }
    ]
}
Ah, when looking up that bucket policy for you, I found out that I actually made three buckets, the additional one being for landing pages. I have no recollection of doing that, but apparently it’s necessary. Sorry!
Custom Landing Page
I made a short guide on the community forum regarding the headaches I had with the custom landing page: https://www.sharetribe.com/community/t/landing-pages-for-idiots-like-me/2788
Feel free to message me on there if you want quick(ish) help.
Making an App. (Really)
This is big. Having a native app will take you a lot further with your marketplace.
Yet, there’s no API for ShareTribe Go. How can you read listings, make a new listing, or initiate a transaction? Really, you can’t do much of anything without an API.
Could we make an API? Absolutely. But I’m not that clever nor am I willing to spend the time to become that clever.
Instead, I’ll point you to Jasonelle, a fork for continued maintenance of the abandoned Jasonette.
Abandoned, you say? Doesn’t sound good… yet, Jasonelle/ette is great at doing a very simple thing: using a very subtle webview to mimic native app behavior. Put another way, we can just package the website as an app.
A few changes are needed to get approval to publish to the iOS app store (Android isn’t as picky). Mainly, you need to change some CSS to look like a native app.
Many more changes are necessary to use the native input/selection features of real apps. This does require some more in-depth tinkering.
Cruelly, I’m going to deprive you of this as a step-by-step tutorial to use Jasonelle to accomplish this. It’d take a tutorial as long as this one, but I’ll consider parting with my secrets (or just doing it for you) for the right price. If you made it this far, you’ve got a fighting chance.
Bye
Great working with you, hope everything’s up to snuff. If it’s not, email me. This is an ancient site that I rarely check, a home only for spam and attempted malware infiltration. I might even be able to help with your site for a little silver. That’s right, this is the end of the free ride but I’ll make a marginal effort to update this guide if I’m notified of issues.
i.am@jeremydavidevans.com
Comments
More practical than in GitHub. Thank you.
Hello,
I was following your tutorial but I am stuck.
These steps :
bundle exec rake db:create db:structure:load
RAILS_ENV=production bundle exec rake db:create
RAILS_ENV=production bundle exec rake db:structure:load
I get aborted because of some credentials missing.
See here: http://pastebin.fr/65216
Nice to meet you, glad to see this was helpful.
This is an error I had as well: https://github.com/sharetribe/sharetribe/issues/4139
I was told it was switched to :local in more recent versions, but perhaps this isn’t the case. I’d advise commenting out the second declaration of :amazon in the following file.
config/environments/development.rb:
# config.active_storage.service = :amazon
If you use AWS later, you just remove the comment. The comment causes it to default to :local, the option you’d like to use initially. Keep in mind, there are two instances of this :amazon option, for whatever reason, so just make sure config.active_storage.service’s final value will be :local.