Exporting create procedure to text file in MySQL

So there’s this handy dandy code:

mysql> show create procedure myProcedure;

That will output the recipe with which myProcedure was created.  But what if you want to export 1,200 lines of code to a text file?  You do that from the terminal.

$ mysql -h example.com -P 3306 -u myUser -p -e "show create procedure myDatabase.myProcedure" >> myExport.txt

So in this case, -h is the host (you can put an IP address, domain, or AWS DNS name there), -P is the port, -u is the user, and lowercase -p is the password (leave it blank to be prompted for it).  Then -e says run this query, and >> appends the output to the named file.  Note that you have to prefix the procedure with the database name to avoid the "no database selected" error.
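A side note on the redirection: >> appends, so if you run the export twice you get two copies in the file; use a single > to overwrite instead. A quick sketch of the difference:

```shell
echo "first run"  > myExport.txt   # > truncates the file first
echo "second run" >> myExport.txt  # >> appends to what is already there
cat myExport.txt                   # prints both lines
```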

If you are just connecting to a local MySQL server, leave the hostname and port out, and use sudo credentials, like this:

$ sudo mysql -u myUser -p -e "show create procedure myDatabase.myProcedure" >> myExport.txt

Open the file with your Vim editor.  That brings up something roughly like this:

create procedure myProcedure\nbegin\nselect * from myTable\nend

All of the \n’s can be a problem if you are exporting thousands of lines of code.  For that, Vim to the rescue.  Type into your Vim editor:

:%s/\\n/\r/g

You need two backslashes on n, like \\n, because the export has already escaped the backslash.  But you only use one backslash on \r because you want an actual line break, not the literal characters.  The final /g replaces every occurrence, which matters because the export is one very long line of text.

To find your line breaks again, search for \n with one backslash: /\n
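If you would rather skip the Vim step entirely, the same substitution can be sketched with sed. This relies on GNU sed, which allows \n in the replacement; as in Vim, the pattern \\n matches the literal backslash-n that the export produced:

```shell
# Replace each literal \n sequence in the export with a real line break.
sed 's/\\n/\n/g' myExport.txt > myExport_clean.txt
```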

How to filter blue light on a Raspberry Pi (Raspbian)

So I had a conundrum. I wanted a blue light filter like Night Light on Windows and ChromeOS, or Night Shift on OS X. But nothing would do it on the Raspberry Pi. I even tried to RDP into one of my other computers to use its night light, but apparently those features disappear through Chrome Remote Desktop. So here was my solution. It has limitations.

This only works for Chrome and Chromium Browser, as far as I know.
This only works for what is viewed via the browser.
If you view another desktop through your browser via Chrome Remote Desktop, you can switch to fullscreen, but you have to do it through the Chrome menu and not the Chrome Remote Desktop menu.

So it was the Chrome Web Store to the rescue. I downloaded an app called Night Shift. There were several; this is the only one I’ve tried, because it has worked very well. You can find it here: https://chrome.google.com/webstore/detail/night-shift/fpnlpehjhijpamloppfjljenemeokfio

It doesn’t change the desktop, theme, or terminal, but you can adjust those settings relative to each application.

Deploying a Laravel app on Ubuntu 18.04

$ sudo su
$ apt install php mysql-server apache2 composer git npm vim
$ cd /var/www
$ git clone https://your.repo
$ cd /var/www/your_repo
$ composer install

Composer usually throws an error here because you have to install PHP extensions. Scroll to the top of the error message. It will say something like Laravel requires ext-dom or Laravel requires ext-curl.

Whenever you see ext-, replace it with php-, and then run:

$ apt install php-dom php-curl

After you install dependencies and composer finishes installing the php portion of your app, do this:

$ vim .env.example

In .env.example, do this:

:sav .env

Then edit the following environment variables.
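For a typical fresh Laravel install, the variables to edit in .env look something like the following. The names are the stock Laravel ones; every value shown here is a placeholder for your own settings:

```
APP_NAME=myApp
APP_URL=https://example.com

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=myDatabase
DB_USERNAME=myUser
DB_PASSWORD=mySecretPassword
```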


If you are running prod, make sure you do this:
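At a minimum, that means these two lines in .env. Leaving debug mode on in production is dangerous because Laravel’s debug pages can expose environment details to visitors:

```
APP_ENV=production
APP_DEBUG=false
```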


An additional dependency must be installed:

$ apt install php-mysql

Then do this:

$ php artisan key:generate
$ php artisan migrate
$ npm install
$ npm run dev

Oh yeah, very important: 90% of out-of-the-box problems with a Laravel deployment can be solved with this. Do it last.

$ chown -R www-data:www-data /path/to/app
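If you want to be a bit more conservative than handing the whole tree over, the directories Laravel actually needs to write to are storage and bootstrap/cache. A sketch, assuming the /var/www/your_repo layout from earlier:

```shell
cd /var/www/your_repo
chown -R www-data:www-data /var/www/your_repo  # web server user owns the app
chmod -R 775 storage bootstrap/cache           # runtime dirs stay group-writable
```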

and remember to exit root.

Why I chose AWS over Digital Ocean

So Digital Ocean is awesome.   They bring a lot to the table that Amazon does not.   For example, with a Digital Ocean Droplet, you have the option of choosing Fedora, CentOS, Ubuntu, and more.  For RPM distros, AWS only has the Amazon AMI (not my favorite) and Red Hat, which is incredibly expensive.   With Digital Ocean, you also have the option of logging in to your droplet through a console located in your admin panel, which is very handy if you lose your keys.  With Amazon, recovery is complex to the nth degree.

Digital Ocean also has predictable prices, which is awesome.   You can watch the price accrue in the upper right-hand corner of your admin account.   With AWS, you have to log into your billing section, or you may be surprised by an extreme uptick in usage.

So why did I choose AWS?

First off, even though a more established open source RPM distro is going to beat the Amazon AMI hands down any day of the week, I’m not doing mission-critical tasks, and I can use Ubuntu, which is the OS of choice for most webmasters.  That said, Fedora and CentOS are going to beat Amazon any day of the week, simply because they are open source and have a vibrant community of contributors.   The Amazon AMI doesn’t even ship with SELinux enabled.  You have to configure it yourself.

What does Amazon have that Digital Ocean doesn’t?   First off, the Free Tier: one awesome year of Amazon freebies.  It by far outstrips any goodie bag from the competitors.    Second, Amazon is fast.   Storage is cheap.   And the firewall, my friends, is secure.    And last I checked, Amazon had Red Hat VMs, whereas Digital Ocean did not.  If I were doing mission-critical tasks, Red Hat would be my weapon of choice.

I recommend Digital Ocean for someone who is new to do-it-yourself webhosting.   Digital Ocean has the best documentation on the Web, and their products are easy to use.   They’ve managed to make something with a moderately steep learning curve intuitive and even pleasant.   But nothing matches the raw power of an AWS account.

Using bash to extract certain columns from a CSV

So here’s a handy dataset: https://npiregistry.cms.hhs.gov

It contains a registry of medical providers (clinics, physicians, etc) and their National Provider ID called an NPI.  Really useful if you need to search for that information or if you need to add it to your datasets, for example, if you have the NPI but you don’t have the first name or last name, or vice versa.

Here’s the problem.  This freaking dataset is HUGE!  The CSV file that I need to match against my tables had over 100 columns, with something like 4 million rows.  That is 400,000,000 cells.  My ‘puter coughed and sputtered.  Here comes bash to save the day.

Check out this post at Stack Overflow.

User702403 had a question, but for me, the question was the answer.

awk -F "," '{print $1 "," $2}' infile.csv > outfile.csv

To a bash noobie such as myself, this is gold. And it just about says it all. $1 will be the first column. $2 will be the next. Use, for example, awk -F "," '{print $1 "," $6 "," $7}' infile.csv > outfile.csv if you want to print the first, sixth, and seventh columns. Fortunately, all of my needed columns were at the front, but if you needed to extract the 3006th column and didn’t know which one it was, you could copy the first row into a spreadsheet and then pivot.
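One caveat: -F "," splits on every comma, including commas inside quoted fields, so this only behaves on simple CSVs. For plain column extraction like this, cut is an even lighter-weight alternative:

```shell
# Pull columns 1, 6, and 7 from a comma-delimited file.
cut -d, -f1,6,7 infile.csv > outfile.csv
```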

How a typo can break Laravel, and how bash can fix it

First off, thank you Nathan Long at StackExchange for saving my app.  Here’s the link to his sage answer:


This is the typo that broke my code:

php artisan app:name foo.bar

Here foo.bar is just pseudocode for a very specific label that I was applying to the app namespace.  However, I carelessly included the top-level domain of the Website the app was on, and namespaces don’t like punctuation.  So artisan recursively renamed every occurrence of App, turning lines like

use App\
namespace App

into use foo.bar, namespace foo.bar, and so on.  Bash to the rescue.

To change it back, I did the following from CLI, slightly modifying the StackExchange code:

find . -type f -name "*.*" -print0 | xargs -0 sed -i '' -e 's/foo.bar/App/g'

(The empty string after -i is the BSD/macOS sed syntax for in-place editing without a backup file; with GNU sed on Linux, drop the '' entirely.)

I was able to do this because foo.bar in this case was a very unique label (note that the unescaped dot is a regex wildcard, so it matches any character in that position). If I had chosen a more generic term, this could have unleashed utter destruction. Back your work up, folks!

Get table size in megabytes, MySQL

Here’s an awesome snippet of code that I got from Thomas Eisenbarth at https://makandracards.com/makandra/39413-mysql-mariadb-show-disk-usage-of-tables-and-columns

SELECT table_name AS `Table`,
       round(((data_length + index_length) / 1024 / 1024), 2) `Size (MB)`
FROM information_schema.TABLES
WHERE table_schema = "$your_database";

Basically, it will display all the tables in a database with their disk usage in the next column. All you have to do is change $your_database to the actual name of a database on your MySQL server.

Be sure to thank Thomas Eisenbarth when you click on his link.

an old JSON typeahead project


It is my belief that database nuts should get familiar with NoSQL.  We live in a world where NoSQL has become very prominent.  Get familiar with JSON.  Get familiar with Mongo and Cassandra.  These tools are very useful in some ways, and you CAN do things with them that you cannot do with SQL.

The link above is my initial foray into programming with JSON.  Please forgive the mess.  It consists of a Centers for Disease Control dataset and an autocomplete library (I want to say that it was bootstrap3/jquery but it’s been a long time since I’ve worked on this).  You can find the source code here.

Raw PHP vs. frameworks

Everything PHP is object-oriented these days, and thank goodness for that. Who could imagine life without Laravel? However, the question should be asked, is Web development in raw PHP still relevant? If so, what is raw PHP good for, compared to frameworks such as Symfony or Laravel?

Consider the following link:


This is a link to a Website that is not associated with techBallet.net. They ran benchmark tests for several Web framework configurations, including Laravel, Symfony, Rails, Sails.js, Django, and, yes, raw PHP. Thank you, TechEmpower, for that!

Take a look at how they did.

On the fortune cookie test, raw PHP scored approximately 23% out of 100%. Not stellar performance compared to asp.net or even more pure OOP solutions. However, it clearly left the frameworks above in the dust, all of which scored below 3%.

Perhaps this is because fewer “moving parts” lead to greater efficiency and faster performance. We have given raw PHP an electric motor, which has limitations but is less clunky than the internal combustion engines of the more popular frameworks (apart from asp.net, for some reason).

Best practices for Laravel Database Migrations

If you have a solid foundation in SQL and creating relational database models, you don’t need this article.   That said, having worked professionally with Laravel migrations and SQL in general here and there, I see several common areas that are overlooked by Laravel developers from time to time.

Best practices for SQL are best practices for Eloquent

Just because it works, doesn’t mean it’s optimal.  A poorly designed relational database will slow down your Website.  Use SQL best practices.  Read about it.  This is a good place to start:  https://www.upwork.com/hiring/data/tips-and-best-practices-for-sql-programmers/

Normalize your tables

Each field should be dependent on 1 unique primary key, when possible.  That’s it in a nutshell, but normalization involves quite a bit more.  Read about it.  https://www.essentialsql.com/get-ready-to-learn-sql-database-normalization-explained-in-simple-english/

Define your tables with exactness.  Do not do this:

 $table->string('my_alphanum_index');
Doing this creates a default field length of 255 characters, which will not only compete with the rest of your table if you are creating a lot of columns, but will also consume more resources when your SQL driver scans the index.  If your index has at most 6 characters, do this:

 $table->string('my_alphanum_index', 6);

Use foreign keys where appropriate.

You have to create an index on the parent and child keys, and create a foreign key in the child table. This is done in the database/migrations folder. Here is an example:

public function up()
{
    Schema::create('my_items', function (Blueprint $table) {

        $table->string('vendor_id', 11);
        $table->string('manufacturer_id', 11);