Anders Tornblad

All about the code

Reinventing a PHP MVC framework, part 1

Let's reinvent the wheel

This is the first part of a series of articles about the mt-mvc PHP MVC framework.

I wanted to know how ASP.NET MVC does what it does, so I decided to find out... by trying to reinvent it... in PHP. My line of thought was this:

  • I know how to USE the ASP.NET MVC framework
  • I know the effects of using the various features of the ASP.NET MVC framework
  • I know the principles of TDD
  • I should be able to reinvent (or reverse-engineer) a working MVC framework by adding unit tests for increasingly complex use of MVC, and making one or a few tests pass at a time
  • I also want to become a better PHP developer

I am perfectly aware of the fact that there are lots of MVC frameworks for PHP that are really capable of taking care of business, but this is not a website development effort. This is a learning effort. Reinventing the wheel works fine for learning - not for production code.

MVC the ASP.NET way

Let's start with something simple. The most basic use of ASP.NET MVC, in its default configuration, appears to work by splitting the path of an incoming request into a controller class name, a view method name, and an optional parameter value that gets passed into the method. There are also default values for all parts of the path.

First set of tests

I imagine a class that is solely responsible for parsing a path and suggesting the name of a controller class and a method to call, so I write some tests for that class first. Hooking things up to PHP's HTTP infrastructure comes later.

class RoutingTests {
    public function CheckAllDefaults() {
        $routing = new Routing();
        $route = $routing->handle('');
        The($route->controllerClassName)->shouldEqual('HomeController');
        The($route->methodName)->shouldEqual('Index');
        The($route->parameter)->shouldNotBeSet();
    }

    public function CheckDefaultMethodNameAndParameter() {
        $routing = new Routing();
        $route = $routing->handle('Articles');
        The($route->controllerClassName)->shouldEqual('ArticlesController');
        The($route->methodName)->shouldEqual('Index');
        The($route->parameter)->shouldNotBeSet();
    }

    public function CheckDefaultParameter() {
        $routing = new Routing();
        $route = $routing->handle('Categories/List');
        The($route->controllerClassName)->shouldEqual('CategoriesController');
        The($route->methodName)->shouldEqual('List');
        The($route->parameter)->shouldNotBeSet();
    }

    public function CheckNoDefaults() {
        $routing = new Routing();
        $route = $routing->handle('Products/Item/123x');
        The($route->controllerClassName)->shouldEqual('ProductsController');
        The($route->methodName)->shouldEqual('Item');
        The($route->parameter)->shouldEqual('123x');
    }
}

These tests are about the default out-of-the-box behavior of the routing subsystem. More advanced features, like registering custom URL patterns, get added later.

class Routing {
    public function handle($url) {
        $parts = explode('/', $url);
        $controllerName = @$parts[0];
        $methodName = @$parts[1];
        $parameter = @$parts[2];
        if (!$controllerName) $controllerName = 'Home';
        if (!$methodName) $methodName = 'Index';
        return (object) [
            'controllerClassName' => $controllerName . 'Controller',
            'methodName' => $methodName,
            'parameter' => $parameter
        ];
    }
}

Usefulness right now

This class does the bare minimum, and making real use of it requires a lot of nuts and bolts to be in place – some URL redirection, a request/response pipeline system, some use of reflection to dynamically create controller instances and call their methods, a lot of thought about how to connect views to the controller methods, and so on. Don't worry; all of that will be covered in the following posts.
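To make the reflection part less abstract, here is a minimal sketch of how a route object like the one above could be dispatched. This is not the framework code from this series – the HomeController class and the dispatch function are hypothetical stand-ins:

```php
<?php
// Hypothetical controller, only here to give the dispatcher something to call
class HomeController {
    public function Index($parameter = null) {
        return "Hello from Index(" . var_export($parameter, true) . ")";
    }
}

// Use reflection to instantiate the controller class and invoke the
// method suggested by the routing result
function dispatch($route) {
    $class = new ReflectionClass($route->controllerClassName);
    $controller = $class->newInstance();
    $method = $class->getMethod($route->methodName);
    return $method->invoke($controller, $route->parameter);
}

// The shape of $route matches what Routing::handle returns
$route = (object) [
    'controllerClassName' => 'HomeController',
    'methodName' => 'Index',
    'parameter' => '42'
];
echo dispatch($route); // prints Hello from Index('42')
```

The real pipeline will of course need error handling for unknown classes and methods, but this is the core mechanism.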

All parts

Reinventing a PHP MVC framework, part 1 (this part)
Reinventing a PHP MVC framework, part 2
Reinventing a PHP MVC framework, part 3
Reinventing a PHP MVC framework, part 4

You'll find the code from this article in the related release on GitHub. The latest version is always available in the GitHub repository.

About to solve an old THREE.js bug and move on with Artsy

I really need to pay more attention. Almost a year ago, THREE.js released the r67 version, which removed the concept of centroids. This broke part 3 of Artsy. I used centroids and the Mesh.calculateCentroid function, not because I needed to, but because some tutorial told me I should.

When the concept of centroids was removed, in April 2014, my JavaScript demos were very low on my list of priorities, but soon I will make time for fixing and advancing them. Who knows, I might even be able to finish Artsy once and for all. I started working on it in October of 2013, so it's really overdue!

For now, I have removed the calls to calculateCentroid and done some small changes to at least get Part 3 to start. Stay posted!

First version of GitHub Webhook handler public on GitHub

Yesterday, I wrote about my efforts to create an easy-to-use GitHub Webhooks handler in PHP, suitable for shared hosting environments.

After a few hours of making the code a little prettier, it is now public on GitHub. I remade the whole thing into an API style that I would enjoy using. Now, you can hook yourself up to GitHub Webhooks like this:

<?php
require_once "mt-github-webhook.php";

// Changes in the QA branch are pushed to the secret password-protected QA web site
\MT\GitHub\Webhook::onPushToBranch("qa-testing")->
    forChangesInFolder("main-web-site/public_html")->
    setGitHubCredentials("github-username", "My5ecretP@ssw0rd")->
    pushChangesToFolder("/www/sites/qa.domain.com/public_html");

// Changes in the PRODUCTION branch are pushed to the public-facing web site
\MT\GitHub\Webhook::onPushToBranch("production")->
    forChangesInFolder("main-web-site/public_html")->
    setGitHubCredentials("github-username", "My5ecretP@ssw0rd")->
    pushChangesToFolder("/www/sites/www.domain.com/public_html");
?>

The clone URL is https://github.com/lbrtw/mt-github-webhook.git. Feel free to fork and play around with it.

Automatic deployment on shared server using GitHub webhooks

If you, like me, have a few spare-time projects, chances are you don't own or rent a dedicated server for your web hosting. I use Loopia (a Swedish web hosting provider) for my hosting purposes. I use their web hotel service, so I have very little control over file system paths, PHP modules and such.

On a dedicated server, using GitHub webhooks is pretty straightforward. When your server gets notified of a push or a closed merge request, you can do a simple git clone to create a fresh full copy of the branch you are using for your deploys. On a shared system, without access to the git command-line tools, it gets a little trickier.

I have developed a php-based solution that works for me. My branch and merge setup looks something like this:

  • master : the main development branch
  • dev : the online testing branch
  • vnext : used for demonstration purposes, and possibly pilots
  • www : the current stable running version

Changes flow between the branches like this: changes (bug fixes, new features, experiments) are pushed often – several times a week – from master to dev, a version candidate is pushed roughly weekly from dev to vnext (for customer demos and internal releases), and a new version is pushed from vnext to www when it is done (live deployment).

All development is performed in the master branch. Whenever a feature makes enough progress to be visible or usable (or is completed), or a bug is fixed, I merge to the dev branch. Every now and then, I'm not the only coder making changes. When other coders are done with a feature or a bug-fix, they create a pull request that I approve to perform the merge.

The dev branch is where we test everything internally. We can do experiments, move stuff around, temporarily remove features or add wild and crazy stuff. When the dev branch is good enough for showing to people, we merge to the vnext branch, which is always a little more stable and feels more "done". This is where customers can check out future features and have their say in stuff.

After a couple of rounds of pushing to vnext, it's time to go live. This is done by merging to the www branch.

Continuous Integration and Deployment

Every time something gets pushed to one of the non-master branches, GitHub posts a message to my webhook handler. The handler reads the message payload to find out which files have changed and which branch is the target. Using this information, it downloads the correct source files from raw.githubusercontent.com and copies them to the correct directory of the shared web server's file system.

// We are only interested in PUSH events for now
$eventName = @$_SERVER['HTTP_X_GITHUB_EVENT'];
if ($eventName != 'push') {
    http_response_code(412);
    exit("This is not a PUSH event. Aborting...");
}

// Read and parse the payload
$jsonencodedInput = file_get_contents("php://input");
$inputData = json_decode($jsonencodedInput);

// What branch is this?
$branchRef = $inputData->ref;

// If I'm interested in the branch, copy all changes, otherwise quit
if ($branchRef == 'refs/heads/dev') {
    copyChanges('/WEB-HOTEL-ROOT/dev.domainname.com/', 'dev', $inputData);
} else if ($branchRef == 'refs/heads/vnext') {
    copyChanges('/WEB-HOTEL-ROOT/vnext.domainname.com/', 'vnext', $inputData);
} else if ($branchRef == 'refs/heads/www') {
    copyChanges('/WEB-HOTEL-ROOT/domainname.com/', 'www', $inputData);
} else {
    http_response_code(412);
    exit("I'm not interested in the $branchRef branch. Aborting...");
}

The code above is simple enough. Depending on the type of event, and on the name of the branch, the script either exits immediately with a nice error message (that you can read in your GitHub repository's webhook settings page), or calls the copyChanges function, shown below.

function copyChanges($rootFolder, $branchName, $inputData) {
    // Check all commits involved in this push for changes that I'm interested in
    $interestingChanges = extractInterestingChangesFromCommits($inputData->commits);
    $changedPaths = array_keys($interestingChanges);

    // No interesting changes? Quit!
    if (count($changedPaths) == 0) {
        exit("No interesting changes. Goodbye!");
    }

    foreach ($changedPaths as $localPath) {
        $fullPath = $rootFolder . $localPath;
        $changeType = $interestingChanges[$localPath];
        if ($changeType == 'delete') {
            // Deleted file - delete it!
            unlink($fullPath);
        } else {
            // Added or modified file - download it!
            $url = "https://USERNAME:PASSWORD@raw.githubusercontent.com/USERNAME/REPOSITORY/$branchName/$localPath";
            $fileContents = file_get_contents($url);
            if ($fileContents !== false) {
                file_put_contents($fullPath, $fileContents);
            }
        }
    }
}

Actually, the code I use contains some more error checking. It also recursively creates new directories when a file needs to be placed in a directory that does not yet exist.
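That directory-creation step could be sketched like this. This is a simplified stand-in for the real error checking, and putFileContents is a hypothetical helper name:

```php
<?php
// Make sure the target directory exists before writing the downloaded file
function putFileContents($fullPath, $contents) {
    $directory = dirname($fullPath);
    if (!is_dir($directory)) {
        // The third argument tells mkdir to create missing
        // parent directories recursively
        mkdir($directory, 0755, true);
    }
    return file_put_contents($fullPath, $contents);
}

// Demonstration with a temporary path that doesn't exist yet
$path = sys_get_temp_dir() . '/webhook-demo/a/b/file.txt';
putFileContents($path, 'hello');
echo file_get_contents($path); // prints hello
```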

function extractInterestingChangesFromCommits($commits) {
    // This function returns an array where
    // the keys are local file paths, and
    // the values are the type of change
    // Something like this:
    // [
    //     'path/file.1' => 'add',
    //     'path/file.2' => 'change',
    //     'path/file.3' => 'delete'
    // ]
    $result = [];
    foreach ($commits as $commit) {
        foreach ($commit->added as $added) {
            $result[$added] = 'add';
        }
        foreach ($commit->modified as $modified) {
            $result[$modified] = 'change';
        }
        foreach ($commit->deleted as $deleted) {
            $result[$deleted] = 'delete';
        }
    }
    return $result;
}

That's about it for now. The script has been running and handling deployments for my spare-time projects for a while now, and I feel confident about it. I'll make some more touchups to this script, and then I'll put it on GitHub for you to star. Check in for a link in a few days.

Ain't nobody got time for Wordpress themes written from scratch

This blog has been on life-support for a while now. I have been busy getting married, focusing on my day-job, enjoying life in different ways, and sometimes life is just too full.

Today I removed my old Wordpress theme that I wrote from scratch and switched to Twenty Fifteen, with just one addition – my custom code formatter that takes care of making HTML, CSS, JS, C# and PHP look nice.

Ain't nobody got time for maintaining Wordpress themes written from scratch! But actually, I got time for blogging again. So I will. I promise...

Tajmkiper now public

Today I launch Tajmkiper as a publicly available, free-to-use utility. You are welcome to use it as much as you want. If you like it, please tell your friends and colleagues about it.

It is designed mobile-first, so it really looks its best on a smaller screen, but works fine in any modern browser.

Try it out: Tajmkiper.com

Read more about it:
Tajmkiper, part 1
Tajmkiper, part 2
Tajmkiper, part 3

Tajmkiper, part 3

Thanks to my earlier efforts (part 1 and part 2), exporting all projects to a CSV file that is saved locally is really easy. That is also the only part of the tajmkiper.com utility that I'll blog about. The complete code will be available on GitHub when the utility is launched for public use, so you are free to check it out.

function exportToCsv() {
    // Order of properties
    var propertyOrder = ["name", "time"];

    // Create CSV exporter
    var csv = new Csv(propertyOrder);

    // Add header line
    csv.add({ "name" : "Project", "time" : "Total time" });

    // Add all projects, using the same (omitted) formatTime function from before
    allProjects.forEach(function(project) {
        csv.add({ "name" : project.name, "time" : formatTime(project.getTotalTime()) });
    });

    // TODO: Create a formatTodaysDate function
    var filename = "tajmkiper-" + formatTodaysDate() + ".csv";
    csv.saveAs(filename);
}

Future possibilities

There is actually something that I would like to do on the server side that doesn't really take anything away from the beauty (?) of a purely client-side codebase. I would like to be able to transfer my projects from one browser to another. One possibility would be to simply take the JSON representation saved locally and expose it for copy/paste, but that's not really smooth enough. I want to be able to save my projects to a server, and then load them up in another browser.

One use case for this is when I use the utility on a mobile device, but my company's time reporting is done in Excel. Then I would like to open the projects on my desktop computer, export them to CSV, import the file into Excel, and keep working there.
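A server-side endpoint for this wouldn't need much. Here is a rough PHP sketch of the idea – store the posted JSON blob under a generated transfer code, and hand it back on request. Everything here (function names, file layout, key scheme) is a hypothetical illustration, not code that ships with Tajmkiper:

```php
<?php
// Store a JSON blob and return a short transfer code for it
function storeProjects($json, $storageDir) {
    $key = bin2hex(random_bytes(8)); // transfer code shown to the user
    file_put_contents($storageDir . '/' . $key . '.json', $json);
    return $key;
}

// Look up a previously stored blob by its transfer code
function fetchProjects($key, $storageDir) {
    // Strip anything that isn't a hex digit to keep the path safe
    $file = $storageDir . '/' . preg_replace('/[^0-9a-f]/', '', $key) . '.json';
    return is_file($file) ? file_get_contents($file) : null;
}

// Round-trip demonstration
$dir = sys_get_temp_dir();
$key = storeProjects('[{"name":"Demo","totalFinishedTime":60}]', $dir);
echo fetchProjects($key, $dir); // prints the stored JSON
```

The browser side would then POST its localStorage JSON to the first endpoint and GET it back from the second, using the transfer code as the link between the two devices.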

EDIT: Tajmkiper is now publicly available to anyone who wants to use it.
Try it: tajmkiper.com

Tajmkiper, part 1
Tajmkiper, part 2
Tajmkiper, part 3 (this part)

Tajmkiper, part 2

One important feature that I want for Tajmkiper ("time keeper" written phonetically in Swedish) is that everything should run locally, including data storage. The data model is really simple: I'll write a class called Project with only three properties: name, totalFinishedTime and startedOn.

Project.name
The name of the project.
Project.totalFinishedTime
Number of seconds the project has been running in completed chunks. If this project's time clock is currently running, the totalFinishedTime does not include the number of seconds on the running clock.
Project.startedOn
The Unix timestamp (Date.getTime()) for when this project's time clock started, or null if the time clock is not started.

This way the project objects don't need to be constantly updated every second. To present the currently elapsed time on the clock, the user interface simply adds totalFinishedTime to the number of seconds that have passed since startedOn. Assuming most people don't fiddle around with the system clock, this also makes it perfectly possible to close the browser, reopen it any amount of time later, and the time clock will still remember when it was started.

function Project(name, totalFinishedTime, startedOn) {
    this.name = name;
    this.totalFinishedTime = totalFinishedTime;
    this.startedOn = startedOn;
}

Project.prototype.getTotalTime = function() {
    if (this.startedOn) {
        // Bitwise or with zero forces the result to be an integer
        return this.totalFinishedTime + (Date.now() - this.startedOn) / 1000 | 0;
    } else {
        return this.totalFinishedTime;
    }
};

Project.prototype.stop = function() {
    this.totalFinishedTime = this.getTotalTime();
    this.startedOn = null;
};

Project.prototype.start = function() {
    this.totalFinishedTime = this.getTotalTime();
    this.startedOn = Date.now();
};

var allProjects = [];

function saveProjects() {
    var json = JSON.stringify(allProjects);
    localStorage["projects"] = json;
}

function loadProjects() {
    allProjects = [];
    var json = localStorage["projects"];
    if (json) {
        var temp = JSON.parse(json);
        temp.forEach(function(item) {
            allProjects.push(new Project(item.name, item.totalFinishedTime, item.startedOn));
        });
    }
}

The next step is to connect the Project objects up to some HTML generation. I could do this using a jQuery templating plugin, but I chose to do it myself, just for the hell of it.

function createProjectsHtml() {
    // First, clear any existing content of the #projects UL element
    var ul = document.getElementById("projects");
    ul.innerHTML = "";

    // Go through all projects, create html elements for each, and add them to the UL
    allProjects.forEach(function(project) {
        var li = createProjectElement(project);
        ul.appendChild(li);
    });

    // TODO: Add the #total LI element
}

function createProjectElement(project) {
    // Create an LI element, and store a reference to it in the Project object for later use
    var li = document.createElement("LI");
    project.element = li;

    // Store a reference to the Project object in the clickable link
    var a = document.createElement("A");
    a.href = "#";
    a.projectReference = project;
    li.appendChild(a);
    a.addEventListener("click", onProjectClick, false);

    var header = document.createElement("HEADER");
    header.textContent = project.name;
    a.appendChild(header);

    var timeSpan = document.createElement("SPAN");
    timeSpan.className = "time";
    a.appendChild(timeSpan);

    updateProjectElement(project);
    return li;
}

function updateProjectElement(project) {
    var li = project.element;
    var timeSpan = li.querySelector("span.time");
    var totalTime = project.getTotalTime();
    var timeText = formatTime(totalTime);
    timeSpan.textContent = timeText;
    li.className = (project.startedOn) ? "running" : "";
}

// TODO: Write a formatTime function

var running = null;

function onProjectClick(e) {
    e.preventDefault();

    // Stop the currently running project first
    if (running) {
        running.stop();
        updateProjectElement(running);
    }

    // Start the clicked project
    var project = this.projectReference;
    project.start();
    updateProjectElement(project);
    running = project;
    saveProjects();
}

Now the projects are clickable, and the HTML gets updated at each click. What is still missing is a timer function that repeatedly updates the running project's time display. Also, now that a reference to the LI element is stored in the Project object, we need to modify the saveProjects function so that it doesn't try to store any HTML elements.

function saveProjects() {
    // Filter out properties to only include array indices, and those properties that need storing
    var json = JSON.stringify(allProjects, function(key, value) {
        if (key == "name" || key == "totalFinishedTime" || key == "startedOn" || key >= 0) {
            // Only certain properties of Project are saved
            return value;
        }
        return undefined;
    });
    localStorage["projects"] = json;
}

function timerFunc() {
    if (running) {
        updateProjectElement(running);
    }

    // Wait for the next second on the clock
    var now = Date.now();
    var millis = now % 1000;
    var wait = 1000 - millis;
    window.setTimeout(timerFunc, wait);
}

document.addEventListener("DOMContentLoaded", function() {
    loadProjects();
    createProjectsHtml();
    timerFunc();
}, false);

Next step

There you have it. What is still missing?

  • Export to CSV
  • Working stop button
  • Clear all timers
  • Remove all projects

EDIT: Tajmkiper is now publicly available to anyone who wants to use it.
Try it: tajmkiper.com

Tajmkiper, part 1
Tajmkiper, part 2 (this part)
Tajmkiper, part 3