Josh Pollock: All the Word that's fit to Press

Developer Automation With NPX and Commander
Torque, 20 Feb 2019
https://torquemag.io/2019/02/developer-automation-with-npx-and-commander/

The more I learn about JavaScript development, the more I'm exposed to JavaScript build and config tools. The build tools, such as Babel and webpack, frustrate me. The config tools, the magic software of joy that sets up all of your build tools for you via magic pixie dust or something so you can just write code, those fascinate me.

I wish we had more of them for WordPress, and ones that did things exactly the way I think they should be done. I really dislike using templating languages for templating code. That's what's stopped me from building my own scaffolding tools before. I always built a plugin that did things the way I wanted my template to do things, then copied the plugin into the templates directory of my generator and then manually added substitution strings. Debugging was a pain.

My new goal is to keep a plugin that has a reference case for each thing I would want a plugin to do — composer.json, Gutenberg blocks, a PHP dependency injection container, etc. — and a scaffolding tool that can copy from that plugin.

I decided to write it in Node so that I could easily share it via npm, and be able to run it locally, in a Docker container or even in a Serverless app. It was a good excuse to learn more about Node’s file system module and npx.

Getting Comfortable With NPX

NPX is a tool that lets you run Node packages from anywhere without installing them first. It's perfect for application scaffolding tools that you only occasionally use. Instead of keeping a copy installed locally, you run the version on npm. For example, to create a React app, you could install create-react-app globally on your computer, or you can run npx create-react-app to accomplish the same thing and never worry about keeping it up to date or eating up storage space.

If you’re not familiar with NPX, which is included with npm, I recommend reading the introductory post about NPX.

In the introductory post, Kat shows how to execute JavaScript stored in a Github gist. As she notes, this is remote code execution, which is potentially highly unsafe. So step one for running this script is to read the source of the code you're about to execute, step two is to decide if you want to execute it, and step three is to execute it.

Let’s do that. First, go to https://gist.github.com/zkat/4bc19503fe9e9309e2bfaa2c58074d32 and look at the script. Last time I looked it just caused a console message to be logged. Make sure it hasn’t changed. If so it hasn’t:

View the code on Gist.

That will cause that message to log.
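If you are curious what it takes for a gist to be runnable with npx, here is a minimal sketch of such a package. The file name, the package.json fields, and the message are assumptions for illustration, not the gist's actual contents:

#!/usr/bin/env node
// index.js -- the whole "package" is one script that logs a message.
// A package.json alongside it needs a "bin" entry pointing at this file,
// e.g. { "name": "npx-is-cool", "version": "1.0.0", "bin": "./index.js" },
// so npx knows what to execute.
console.log( 'npx is cool' );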

Running The Package With A Local Path

So this is cool: we can develop a script stored as a gist for our automation. But the development experience is going to be bad. If we have to push code to the gist and pull it before every test, that will be slow. I'd rather test the code locally.

Yes, I could run the code directly with node, i.e., switch to the local directory and run node index.js or whatever. What I really want is to use the development version in context. What if I could use npx ~/npx-is-cool from anywhere on my computer to use the local development version of npx-is-cool?

Turns out npx can use a local file path, so that is possible. Let's walk through it; doing so will show you some basics of how npx works.

First, download a copy of that gist. Put it in your main user directory and change the name of the directory to “npx-is-cool”. Then open up the index.js and you will see:

View the code on Gist.

This is a pretty basic script. Change its output to something more fun, like "I am the Batman", so you can prove your local version is being used and because you are the Batman.

View the code on Gist.

Now you can run this command from anywhere on your computer to use your own npx-is-cool:

View the code on Gist.
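Putting those pieces together, a sketch of the local-development flow looks like this; the path and message are assumptions:

#!/usr/bin/env node
// ~/npx-is-cool/index.js -- edited so the output proves the local copy is running.
console.log( 'I am the Batman' );

// From any directory, run the local package by path instead of by package name:
//   npx ~/npx-is-cool
// ...which prints "I am the Batman".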

Now we have a basic model for developing the tool: develop the npx package locally, using and debugging it in context. That last part is important. I'm making a development tool, so I want to use it on the tools I am developing to see if it works or not, and to have its capabilities available to me right away once it does.

Yes, I also want to push my changes to remote version control and make the package available via npm. But, when I’m developing, I do not want to think about managing releases.

Enter The Commander

Commander is an npm package for building command line interfaces in Node. Here is a command that outputs just "Hi Roy":

View the code on Gist.
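A minimal Commander command along those lines might look like this; the command name and default output are assumptions, not the gist's exact code:

#!/usr/bin/env node
const program = require( 'commander' );

// One sub-command that greets whoever is named, defaulting to Roy.
program
	.command( 'hi [name]' )
	.description( 'Say hi' )
	.action( ( name ) => {
		console.log( `Hi ${ name || 'Roy' }` );
	} );

program.parse( process.argv );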

Developer Automation With Commander

Using Node To Work With Local And Remote Files

For something practical, I created a command that copies the two PHP files I use to load assets for a React app in a WordPress plugin and changes their namespace.

To accomplish this we will need four things:

  1. A function to download a file from a remote URL to the local file system.
  2. A function that uses the first function and then changes the namespace in the resulting file.
  3. A function that calls those functions with the right arguments.
  4. A command to call that function.

Let’s go through the list. I do not want to write a full tutorial on the node file system, but here’s your crash course.

In Node, we use the filesystem — read and write files for example — using “fs”. For example, to check if a file exists:

var fs = require('fs');
fs.existsSync(__dirname + '/hiRoy.txt' );

 

The fs module is pretty readable. This function is called "existsSync". It checks — synchronously — if a file exists. We can write a file synchronously with fs.writeFileSync().
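For example, a quick sketch of writing a file synchronously; the file name and contents are placeholders:

var fs = require( 'fs' );
// Create (or overwrite) hiRoy.txt next to this script.
fs.writeFileSync( __dirname + '/hiRoy.txt', 'Hi Roy' );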

If you’re not used to require(). It brings the value of the module.exports from the file you specify into scope. For example, if you have the file “foo.js” and inside you have module.exports = function foo(){}; Then when you use const foo = require( ‘./foo’ ); from a file in the same directory that function is now in stored in the const foo. We can leave off the file path — require( ‘react’ ) — to access a module in the node_modules directory, in this case, the export of node_modules/react/index.js

Here is a module to handle downloading a file via https — using the https module — and writing it to file using the fs module:

View the code on Gist.
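A sketch of roughly what that module does, assuming a plain https.get() piped into a write stream with a completion callback and no error handling:

// download.js
const https = require( 'https' );
const fs = require( 'fs' );

/**
 * Download a remote file over https and write it to disk
 *
 * @param {string} file Remote file to download
 * @param {string} destPath Local path to write the file to
 * @param {Function} onComplete Called once the file has been written
 */
function download( file, destPath, onComplete ) {
	const writeStream = fs.createWriteStream( destPath );
	https.get( file, ( response ) => {
		response.pipe( writeStream );
		writeStream.on( 'finish', () => writeStream.close( onComplete ) );
	} );
}

module.exports = download;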

Notice the last line — module.exports = download. Here download is a reference to the function, so when we require this file, that function is usable. That's how we use it in our next module.

Here is the module to download a file and change its PHP namespace.

const download = require( './download' );
const replace = require( 'replace-in-files' );
const path = require( 'path' );

/**
 * Download a file and change its namespace
 *
 *
 * @param {string} file Remote file to copy
 * @param {string} destPath Path to write file to
 * @param {string} nameSpace Namespace to use for new file
 */
function downloadPhpAndNameSpace(file,destPath,nameSpace){
	download(file,destPath, () => {
		replace({
			from: "/calderawp\\WordPressPlugin/g",
			to: nameSpace,
			files: [destPath]
		});
	});
}
module.exports = downloadPhpAndNameSpace;

If you’re still getting used to required() look at the difference between the required for download and for replace-in-files. The first starts with a file path ‘./’, so node looks for the file. The other does not, so it looks in node_modules.

None of these functions, so far, work with the specific files we want to work with. They work with any files, which is good — these are common needs. But let's start solving the problem at hand. We need to download two files and re-namespace them. With these two functions, we just need one more to set up the paths of where the files go.
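Here is a sketch of what that third function could look like; the remote base URL and file names are made up for illustration:

const downloadPhpAndNameSpace = require( './downloadPhpAndNameSpace' );

// Assumptions: where the reference plugin lives and which two files we copy.
const BASE_URL = 'https://raw.githubusercontent.com/calderawp/example-plugin/master/php';
const FILES = [ 'RegisterScripts.php', 'EnqueueScripts.php' ];

/**
 * Copy the client asset loader files into a plugin and re-namespace them
 *
 * @param {string} pluginDir Local plugin directory to copy into
 * @param {string} nameSpace Namespace for the copied files
 */
function copyClientAssets( pluginDir, nameSpace ) {
	FILES.forEach( ( file ) => {
		downloadPhpAndNameSpace( `${ BASE_URL }/${ file }`, `${ pluginDir }/php/${ file }`, nameSpace );
	} );
}

module.exports = copyClientAssets;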

Creating The Command

Now it’s time to wrap all of this up in a command so we can type npx caldera-former client-assets ~/my-plugin Vendor/Package and get our scripts copied to ~/my-plugin with the namespace Vendor/Package.

This is a sub-command of our package. I like this pattern vs having just one command with lots of options. It will make it easier to add additional commands and options later.

View the code on Gist.
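A sketch of the Commander wiring, assuming the copyClientAssets() helper sketched earlier; the command and argument names follow the example above:

#!/usr/bin/env node
const program = require( 'commander' );
const copyClientAssets = require( './copyClientAssets' );

program
	.command( 'client-assets [destPath] [nameSpace]' )
	.description( 'Copy the client asset loader files into a plugin and re-namespace them' )
	.action( ( destPath, nameSpace ) => {
		copyClientAssets( destPath, nameSpace );
	} );

program.parse( process.argv );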

In the command() function, the words in square brackets become variables passed to the function "action". The action function is bound to a simple callback that calls the function created in the last step.

Now You Take Command

I’m enjoying learning more about the parts of JavaScript that I do not normally get to work with due to only using JavaScript for front-end dev. Learning how Node modules are structured helped me understand more about what webpack is abstracting. I also learned more about the file system.

More importantly, I’ve been working on my npx app that should save me and my team time. If you think a tool like this could save your team time, fork it and make your own. At me on Twitter @josh412 or leave a comment with what you create.

Using Express To Build A Node.js Server To Proxy The WordPress REST API
Torque, 13 Feb 2019
https://torquemag.io/2019/02/using-express-to-build-a-node-js-server-to-proxy-the-wordpress-rest-api/

Currently, the WordPress front end is powered by JavaScript. That’s clear. But what about the server-side? We’ve always used PHP, but the more I learn JavaScript development and the more I use it, the more the switching back and forth with PHP hurts. Also, JavaScript is so much easier to deploy than PHP at this point. Serverless JavaScript apps are one click now, and serverless PHP is something I read about in Medium posts too complex for me to ever reproduce.

Once you start writing front-end JavaScript with build tools like Babel and webpack, you’re pretty close to server-side JavaScript. I find that the more I learn server-side JavaScript — JavaScript executed in the node runtime — or to be less pedantic “node”, the more I understand the scoping and imports and exports that make working with webpack in the front-end tricky.

For a fun WordPress-related exercise, let’s build a JavaScript server that pretends to be the WordPress REST API. It will not be the WordPress REST API but we’ll make it act like one for a few routes.

There are a few reasons why you might want such a server. One is to create a fake server for integration testing of a JavaScript-powered WordPress application. That way you would not need to have a WordPress environment, i.e., PHP and MySQL. You also could use this server to proxy a real site and create a static cache of REST API responses. The actual server would not be accessible to the public.

The code in this article makes use of async/await. This is a relatively new JavaScript API. If you’re not familiar with it, you only need to know two rules, which I covered in more depth in this post. The short version is asynchronous functions are declared with the `async` keyword before them and inside of async functions we can use the `await` keyword to wait for the resolution of a promise to continue to the next line.
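If you just need the shape of it, here is a tiny example:

// await pauses the async function until the promise resolves,
// then execution continues on the next line.
function delay( ms ) {
	return new Promise( ( resolve ) => setTimeout( resolve, ms ) );
}

async function example() {
	await delay( 1000 ); // wait one second
	return 'done';
}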

Express Start

We’re going to be writing a very simple Express app in this post. Express is a JavaScript routing system that is very popular for server-side JavaScript applications. This is not a tutorial on how to use Express. I’m not an expert in Express. Everything about Express we’re going to use is covered in this one page of their documentation.

Create a new directory, initialize a new npm project and install express:

View the code on Gist.

Now, create an index.js in a directory called source. Here is a very basic, one route app:

View the code on Gist.
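A sketch of what that one-route app looks like; the port number is arbitrary:

// source/index.js
const express = require( 'express' );
const app = express();

// One GET route that matches the path "/".
app.get( '/', ( req, res ) => {
	res.json( { message: 'Hello' } );
} );

app.listen( 3000 );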

We have added one route. It works with GET requests — we used the method get(), not post() — and it responds to requests that match the path "/". Inside the route handler, we get a request and a response object. The first, the request, will come in handy shortly; the other we use to create a response.

Two important things to know about the response. First, you can set the HTTP status code for the response with the status() method:

View the code on Gist.

Second, you can send a JSON response with the json() method.

View the code on Gist.
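Chaining the two looks like this; the route and message are placeholders:

app.get( '/missing', ( req, res ) => {
	res.status( 404 ).json( { message: 'Not found' } );
} );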

Serving JSON From Files

That’s cool, but let’s show some WordPress content with this app.

Dynamic Routes With Express

One more Express concept to learn — dynamic routes. Let's say we wanted our server to respond to a request to /posts/hello-world with the JSON for the post with the slug hello-world. We would need to know, in our route handler callback, that post slug — "hello-world". Also, we'd need that route to exist for any post slug. That's a dynamic route.

Inside of our route callback the request object will have the property “params” that has all of the matched parameters from the request.

View the code on Gist.
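A sketch of a dynamic route that just echoes the matched slug back:

// ":slug" makes the second path segment available as req.params.slug.
app.get( '/posts/:slug', ( req, res ) => {
	res.json( { slug: req.params.slug } );
} );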

That shows us that we can get the post slug from the URL. For now, I'm going to assume that your project has a directory called content/wp-json/posts with JSON files named for WordPress posts, by slug. If that seems like an oddly specific thing to have on hand, or you are curious how to do that besides cutting and pasting responses from your browser, take a look at my post on how to do that every time a post is saved.

This updated handler uses that dynamic route parameter to build that file path and then returns the JSON.

View the code on Gist.

That is, if it exists — we do not have any handling for that yet, which is suboptimal. Let's check first if the file exists, and if not, return a 404:

View the code on Gist.
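A sketch of the file-backed route, reusing the app from the basic example; the content directory location is an assumption based on the description above:

const fs = require( 'fs' );
const path = require( 'path' );

// One cached JSON file per post, named by slug.
const contentDir = path.join( __dirname, '../content/wp-json/posts' );

app.get( '/posts/:slug', ( req, res ) => {
	const filePath = path.join( contentDir, `${ req.params.slug }.json` );
	if ( ! fs.existsSync( filePath ) ) {
		return res.status( 404 ).json( { message: 'Not found' } );
	}
	res.json( JSON.parse( fs.readFileSync( filePath, 'utf8' ) ) );
} );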

Proxying Remote Content

That’s basically all we need if you are OK supplying all of your content via JSON files built using some other process. But what if this could also create the files via a REST API request to a real WordPress site and then cache the results for next time? Sounds cool. Let’s do it, first we’ll need an API client:

View the code on Gist.

This is the Node client for the WordPress REST API. We will use it to get the post from a remote site. The client uses API discovery to auto-configure all of its routes:

View the code on Gist.
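With the wpapi package, discovery looks roughly like this; the remote site URL is a placeholder:

const WPAPI = require( 'wpapi' );

// Auto-discover the routes of the remote WordPress site.
const sitePromise = WPAPI.discover( 'https://example.com' );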

Now, we can use this client to query for the post and write the response to the file system to prevent another request from being made later:

View the code on Gist.
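A sketch of the cached proxy route, reusing the contentDir and sitePromise from the earlier sketches:

app.get( '/posts/:slug', async ( req, res ) => {
	const filePath = path.join( contentDir, `${ req.params.slug }.json` );

	// Serve from the local cache when we already have the post.
	if ( fs.existsSync( filePath ) ) {
		return res.json( JSON.parse( fs.readFileSync( filePath, 'utf8' ) ) );
	}

	// Otherwise, query the real site, cache the response, then serve it.
	const site = await sitePromise;
	const posts = await site.posts().slug( req.params.slug );
	if ( ! posts.length ) {
		return res.status( 404 ).json( { message: 'Not found' } );
	}
	fs.writeFileSync( filePath, JSON.stringify( posts[ 0 ] ) );
	res.json( posts[ 0 ] );
} );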

What Else Would You Do With This Server?

That’s enough to show how to create routes and proxy the WordPress REST API. And that’s enough to make you dangerous — I mean useful. For example, what about using this server server-side rendering of React components that you may also be using on the front-end:

View the code on Gist.

I’m not going to get into server-side rendering for React in this post. I recommend reading this post to learn more.

You can take a look at the Github repo of the project this is based on to follow along with what I am doing. Feel free to copy or fork that repo. Leave a link to what you create in the comments or come at me on Twitter – @josh412 – with the link.

Using the WP Queue to Copy REST API Data to Files
Torque, 29 Jan 2019
https://torquemag.io/2019/01/using-the-wp-queue-to-copy-rest-api-data-to-files/

Recently Matt Shaw from Delicious Brains published a post about a new library they created to help in one of their products. This library, WP_Queue, provides a Laravel-like job management system for WordPress. A job queue is a system that allows you to schedule jobs to run in the future. We tend to use jobs for two reasons. First, we may need to wait a while, like if we want to schedule a follow-up email in a week. The other reason is performance. Maybe we need to do something computationally expensive and don't want the user to wait.

A job manager gives us a scheduling system — some way to store jobs until they need to be run — and a job runner — some tool for running the jobs. WordPress' wp_cron sort of fits this description. However, through using WP_Queue, I've found that it fits my needs better.

The WP_Queue

The WP_Queue package looks great for a few reasons. First, the jobs are abstracted from the runner and scheduler. I can write a job class and test it as a unit in isolation. Second, the job scheduler is abstracted. By default, jobs are recorded in the WordPress database, and then when they are needed, they are scheduled with wp_cron. But I can also use a development driver that makes them synchronous, and there is a Redis driver in progress. So let's get started.

Set Up

The WP_Queue library is a composer package. First, install the package in your plugin:

View the code on Gist.

You will need to add the database tables for the scheduler. The Readme for the package has instructions. Add this to your plugin’s activation hook or wherever you add your own tables.

Creating Jobs

If you’ve ever used Laravel’s job queue, the structure of the \WP_Queue\Job class, will be familiar. Your job class has to have a handle() method. That method is called when the job is run. You probably will have a __construct() method as well.

The key concept to understand about these classes is that the properties of the class are serialized to the database when the job is scheduled and used to instantiate the class when it is run.

For example, let’s say you wanted to create a job to run whenever a post is saved. In the __construct() method, you would pass the post ID and use that argument to set a property. When the handle method runs, that property will be set with the post ID. In the handle method, you can use that property — the saved post ID that was saved with the job — to get the saved post from the database. Here is what that looks like:

View the code on Gist.

Copy A Post’s JSON To A File When It Is Saved

What we’ve done so far is a job that gets a post from the database. That’s cool, we could start a runner, schedule this job to run every time a post is saved, and make this job get the saved post’s REST API response and write it to a JSON file. Let’s work through that list backward.

Why that order? That last requirement will be one class, which I can test in isolation, and if it works, then I can set up the runner and hook into the save_post action knowing that the job works. If I did it the other way around, I wouldn't know if my problem was the job or the queue. That's bad science.

I’ve written about a similar appraoch to running jobs after a post is saved before with Torque using a different task runner.

Write A Post’s JSON To A File

Let’s keep working on our job class. We have everything we need to get the post. We need to get the response object that the WordPress REST API would create for the post, serialize it to json and write that JSON to a file.

I wrote a post for Torque a while ago about how to get posts from the WordPress REST API without making HTTP requests. I stole most of this code from there:

View the code on Gist.

This is the complete job. I used the WordPress REST API's posts controller to create a WP_REST_Response, used json_encode() to make it a string, and saved it to a file named for the post slug.
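The job itself lives in a gist; a sketch of a class with that shape, where the class name, output path, and details are assumptions, might look like this:

<?php
use WP_Queue\Job;

class SavePostJsonJob extends Job {

	/**
	 * @var int ID of the saved post, serialized with the job.
	 */
	protected $post_id;

	public function __construct( $post_id ) {
		$this->post_id = $post_id;
	}

	/**
	 * Runs when the queue processes the job.
	 */
	public function handle() {
		$post       = get_post( $this->post_id );
		$controller = new WP_REST_Posts_Controller( 'post' );
		$request    = new WP_REST_Request( 'GET', '/wp/v2/posts/' . $this->post_id );
		$response   = $controller->prepare_item_for_response( $post, $request );

		// Write the JSON to a file named for the post slug; the directory is an assumption.
		$dir = trailingslashit( wp_upload_dir()['basedir'] ) . 'wp-json/posts';
		wp_mkdir_p( $dir );
		file_put_contents( trailingslashit( $dir ) . $post->post_name . '.json', wp_json_encode( $response->get_data() ) );
	}
}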

Where you stored the file and what you call it depends on your needs.

Right now, we can instantiate this class to test if it works. We can run it directly with something like this, where $postId was the ID of a published post:

View the code on Gist.

Then you should see in the path you set for writing a file, a file with the JSON representation of the post. We can also write an integration test for it:

View the code on Gist.

Scheduling The Job To Run When The Post Is Updated

Now that we know our job would work if it was scheduled to run, we need to schedule it to run, whenever a post is saved. The save_post action fires when a post is saved, so we can use that. In the callback function, we will instantiate the job class and pass it to wp_queue()->push().

View the code on Gist.
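A sketch of that hook, using the job class sketched above:

<?php
// Queue a job every time a post is saved.
add_action( 'save_post', function ( $post_id ) {
	wp_queue()->push( new SavePostJsonJob( $post_id ) );
} );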

That’s enough to schedule it to run, as soon as possible.

Delaying Job Execution

The second argument of the push() method is the delay, in seconds, before the job runs. The last code example didn't use that argument, so the job will run as soon as the queue gets to it. If we wanted to delay it by 10 minutes, we could pass 600 — 10 minutes in seconds — as the second argument.

View the code on Gist.

Setup The Job Runner

Last step: we need to setup the queue to run. We can run the queue using wp_cron:

View the code on Gist.

When you are doing local development, it's better to use the "sync" driver instead of the database driver so jobs run synchronously, i.e. right away. This makes testing easier.

View the code on Gist.

Use Without WP-CRON

I’m not going to enumerate the many shortcomings of wp_cron. But, what’s really cool about WP_Queue is I can set up my own system to run it. For optimal perfomance, I don’t want to rely on WordPress to trigger wp_cron and I don’t want to add more “cron” jobs to its queue. Instead, I’d rather add a REST API endpoint and ping it with an external cron job, such as one run with setCronJob.com or something.

We can access the queue with the function wp_queue(). That returns a Queue object that has a public method called worker(). That returns a Worker object with a process() method that will run one job if any jobs are available to be processed. We can use a while loop to run jobs until process returns false, and use an incrementing integer to keep the loop from running for too many requests:

View the code on Gist.
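A sketch of that loop wired to a REST API route; the route name, the job cap, and the attempts argument to worker() are assumptions:

<?php
// Process queued jobs until none remain, capped so one request cannot run forever.
function my_plugin_run_queue() {
	$worker = wp_queue()->worker( 1 );
	$i      = 0;
	while ( $i < 50 && $worker->process() ) {
		$i++;
	}
	return rest_ensure_response( array( 'processed' => $i ) );
}

add_action( 'rest_api_init', function () {
	register_rest_route( 'my-plugin/v1', '/run-queue', array(
		'methods'             => 'POST',
		'callback'            => 'my_plugin_run_queue',
		// Lock this down with real authentication before using it in production.
		'permission_callback' => '__return_true',
	) );
} );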

In this example, I’m hooking it up to a REST API route, that would allow an external cron service to trigger the queue. You manage your server, it would be most secure to trigger that function with a real cron or a command that can only be run from inside of the server/ container. If that’s not possible, at a public/secret key authentication check or something.

Adopting TDD for an Existing Plugin
Torque, 24 Jan 2019
https://torquemag.io/2019/01/adopting-tdd-for-an-existing-plugin/

Test-driven development (TDD) is a philosophy of software development that is based on writing tests before writing a feature or bug fix. This is a big difference in terms of how you think about development. I find it to be very hard to move to TDD, both because it’s a different mental model and because it’s hard to change how you approach a code base.

Before TDD, I wrote code that should work, tested it manually in my browser to see why it didn't work, fixed it and then wrote tests. With TDD, I write function declarations and tests that describe how the code should work and identify what criteria would make me believe the code worked. Only then do I make it work. The definition of "works" is no longer ambiguous: I have a standard to judge myself on that can be enforced programmatically. By writing out my test criteria in code, the person doing the code review can assess whether the testing criteria are correct. That last part avoids the ambiguous "works for me" statement between team members and the follow-up to figure out what is different — test environment or testing criteria. With manual QA, unless you document every testing step, you just don't have that.

In this article, I’m going to walk through using examples from my plugin Caldera Forms of using phpunit to write TDD pull requests.

This is not an article on how to do testing in WordPress development. I have written about PHP unit testing, PHP integration testing, JavaScript unit testing, and JavaScript integration testing for Torque before. This post is based on my experience adopting TDD for our plugin Caldera Forms. I prefer TDD and think it is faster, especially in JavaScript development, than manually testing with the browser. When done right, it makes code more maintainable.

A Quick Introduction To TDD

If you’re new to TDD, you might find it easier to think of it as test-first. Here are the steps in an easy to follow list:

  • Write enough of the functions you need to describe them in tests.
  • Write failing tests.
  • Make tests pass.
  • Code review.

When using TDD, you have to make and push commits that do not work yet. Pushing to master or your develop branch before the tests pass would mean every change breaks the main branch, which is a problem. That's why git flow makes sense. You open a new branch, make incremental changes and only merge the changes when they pass automated checks such as tests, lints, code quality scanning, and a human code review pass.

Let’s talk about a contrived example so it’s simple and then look at a real-world example.

Let’s say your requirement was to create an object for adding two numbers. First, i would write a function with no body:

View the code on Gist.

Then I would add two tests that asserted the rules of mathematics were being followed:

View the code on Gist.

Then I would change my original function — in a separate commit — to actually work:

View the code on Gist.
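The gists boil down to three small steps; here is a sketch in Jest-style JavaScript, with illustrative names:

// Step 1: declare the function with no body yet.
// function add( a, b ) {}

// Step 2: failing tests that describe how it should work.
describe( 'add', () => {
	it( 'adds two numbers', () => {
		expect( add( 2, 2 ) ).toBe( 4 );
	} );
	it( 'is commutative', () => {
		expect( add( 2, 3 ) ).toBe( add( 3, 2 ) );
	} );
} );

// Step 3: in a separate commit, make the tests pass.
function add( a, b ) {
	return a + b;
}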

The failing tests get committed before the code is made to work for two reasons. First, commits should do one thing only when possible. Second, it's possible that after you make the tests pass, you or the person code reviewing the change may decide the tests were correct but the implementation was not. If that happens you can revert the commit with the implementation without losing the tests or doing fancier git, and then start over.

TDD Pays Off Later

This code is testing JavaScript right now. But having a well-tested function in place allows you to safely iterate on it. Suppose you needed to add a third argument to optionally round the result to a specified number of decimals. First change the function signature:

View the code on Gist.

I made this an optional argument to maintain backward-compatibility. To ensure that my assumption is correct, I’m not going to change the existing tests. They prove that backward-compatibility was maintained. Changing existing tests smells bad and is a sign that your tests are too rigid.

Instead, I would add two more tests:

View the code on Gist.
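A sketch of the new signature and the two added tests; the exact rounding behaviour shown here is an assumption about the requirement:

// The third argument is optional, so existing calls keep working.
function add( a, b, decimals ) {
	const sum = a + b;
	return undefined === decimals ? sum : Number( sum.toFixed( decimals ) );
}

// Two new tests cover the new behaviour without touching the old ones.
it( 'rounds when a decimals argument is passed', () => {
	expect( add( 0.1, 0.24, 1 ) ).toBe( 0.3 );
} );
it( 'does not round by default', () => {
	expect( add( 0.1, 0.2 ) ).toBe( 0.1 + 0.2 );
} );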

By implementing tests at each stage, I have insurance that my improvements are actual improvements, and not causing new defects. When you have to modify existing code in the future, the time you invested in tests pays off.

How Much Testing Coverage Do You Need?

It depends on what you are building. The most orthodox rules of TDD I can find come from Uncle Bob:

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail, and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

That is a standard that is very rigid and can easily lead to tests that make changing the codebase harder. Remember, the goal is to increase, not decrease, development velocity.

Kent C. Dodds — an engineer at PayPal who is also the author of a course on testing JavaScript applications — has a great post on this topic. He argues in that post that "you get diminishing returns on your tests as the coverage increases much beyond 70%". I don't love that statement, but he is more experienced than me, by a lot. He does note that his open-source projects have 100% coverage because they are "tools that are reusable in many different situations (a breakage could lead to a serious problem in a lot of consuming projects)", which sounds like a rule that would apply to a WordPress plugin. He also writes that his OSS projects are "relatively easy to get 100% code coverage on anyway." Which doesn't sound like a lot of WordPress plugins.

Personally, my rule is more coverage than we currently have. Lack of tests is a technical debt that comes due later. If writing tests now takes more time, it's worth it. For brand-new code, it forces you to write code using testable patterns. Having to refactor code so it's testable first is a pain sometimes.

Isolated unit testing in WordPress is not simple. Oftentimes, automated UI tests using Cypress.io or Ghost Inspector have served me better. I can cover a lot of functionality quickly without having to worry about the fact that the code isn't really testable.

Adding A Feature With TDD

I’d like to walk through an example of a TDD pull request I made to Caldera Forms. In this case it was a new feature — adding a setting for maximum file upload size. One part of TDD that I like is it forces you to figure out what new functions you need BEFORE you write them. I don’t know about you, but I’ve written a lot of code that took a lot of time to get working only to realize I didn’t need it. TDD forces me to think through my plans before moving forward.

Here is the pull request on Github if you want to read it: https://github.com/CalderaWP/Caldera-Forms/pull/2823

I should note that this PR is weird because we had to merge multiple in-progress branches from related changes together. Adopting TDD is messy and I claim no perfect adherence to its laws.

Writing Failing Tests

Sometimes it’s hard to do that all at once. For example, in this case, I needed to develop some utility methods to read field settings and do the file size check and I needed to integrate those utility methods into the existing code. I chose to do that in two steps. I got the utility methods working and tested and then I moved to use those new methods.

Here is my first commit: https://github.com/CalderaWP/Caldera-Forms/pull/2823/commits/d878e9d501af6aae92d72e45340a185dea1e9c69

If you look at the code, I added two utility methods to a class, and gave them no function body:

View the code on Gist.

That’s it in this commit for the code I’m developing. I also committed tests that demonstrate how those functions should work:

View the code on Gist.

These tests show how the new methods are supposed to work. The inline comments explain why each assertion is being made. That process forced me to think about how the settings, which at this point had no UI, should be structured and how I would later use them.

My general rule is that a commit should do one thing only. This commit adds the new methods and the failing tests. The word “and” in the previous sentence shows that I had violated that rule. I probably should have done two commits. More importantly, I want to note that I spent a lot of time working through the logic of what to test and there are a lot of pre-commit revisions there.

Making Tests Pass

In the past, when I was not using TDD, I would have had to test this by adding the UI option, creating a form with that option, then submitting the form with a file and seeing if my code worked as expected. Using Xdebug to step through the code and examine the results of the functions helps that process a lot, but it's SO slow. Also, once it does work, there is no way to know if anything else breaks that feature later.

This is why I find TDD faster — when it's possible — and saves time and worry in the future. Running the whole test suite between each commit would make the process very slow, though. For JavaScript testing I use Jest as my test runner and it can be easily configured to run only the tests on code that has changed. That leads to a strategy where I write failing tests, get all of the tests related to changes to pass and then I tell it to run all of my tests to make sure nothing unexpected happened.

Here is an article on how to do something similar with phpunit. My personal solution is to use a "now" group annotation in my docblocks. In phpunit, you can use @group to group your tests by feature. Then you can run only tests in that group with the --group flag in the phpunit CLI. In Caldera Forms we have a composer script to run the @now tests.
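The pattern looks roughly like this; the test case base class and test name are illustrative:

<?php
class Test_Max_Upload_Size extends WP_UnitTestCase {

	/**
	 * Tag the test I am currently working on.
	 *
	 * @group now
	 */
	public function test_knows_file_is_too_large() {
		// ...assertions for the method under development go here.
	}
}

// Run only the tagged tests from the command line:
// phpunit --group now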

With the tests in place, I was able to start working on my two new methods, run just the two tests each time, and see why each test was failing. In the process, I saw PHP errors, warnings, and notices as well as test failures. I got pretty annoyed at myself at one point for code that should have worked and I had no idea why not, but at least I had proof I wasn't insane.

I should also note that the reason I wrote two tests with multiple assertions is that they fail faster that way. If I had one assertion per test as is often recommended and is generally a good idea, I would have seen 10 or so fails per test. That’s a lot harder to make sense of. Organizing one test with one assertion after another helps solve one problem at a time.

The actual functions I wrote look pretty simple:

View the code on Gist.

My tests cover a few different situations that are easy to overlook. For example, what happens if the setting doesn't exist? Conditional logic based on the contents of an associative array is tricky in PHP because indexes may be missing, and values may be represented with different scalar types — what if the integer 1 or the strings '1' or 'true' is used instead of the boolean true?

I definitely thought, "this is simple, I don't need tests," while working on these methods. Also, my first attempt didn't work, and I only know that because my tests failed, and failed in a way that helped me see why.

Moving Beyond Isolation

The tests so far were technically integration tests because the environment requires WordPress to work and I did not think using mocks for unit tests was worth the pain. The rest of the tests were mainly actual integration tests.

For Caldera Forms file fields, we have a separate endpoint for uploading the file to. In this case, I didn't need to touch that endpoint's handler, because I didn't mix this logic into the API handler. The API handler's responsibility is the interaction of the WordPress REST API and a separate class, "UploadHandler", that does the upload, using data passed from the REST API.

That meant I only had to make the changes in my "UploadHandler". That class was changing: it needed to enforce the file size limit, while the business logic lived elsewhere. I needed to make sure that with the right size file it worked the same way as before, and that it threw an exception when the file was too large. Here are the three new tests:

View the code on Gist.

The first test — testKnowsFileIsTooLarge() — does not do all of the permutations of tests that I had for the utility method I had previously created, because I already know it works. I was just checking that function works in this context.

The second test — testExceptionWhenFileIsTooLarge() — ensures that the result of that test passing is that an exception is thrown. Notice that I didn't use the try/catch pattern. I used phpunit's expectException(). That's the right way to do it according to the phpunit manual and makes it simpler to write than running assertions inside of a try/catch, but it means that the test code looks less like the way someone would actually use the code, which smells a little bad to me.

The third test — testAllowsFileSize() — makes sure that when the right sized file is passed, it works as expected. This test doesn't do anything super specific. I mainly added it because the existing tests didn't account for these settings. It's an integration test that will fail if one of many things goes wrong. Which tests it fails with will indicate more clearly what the issue is.

It’s Worth It

Adopting a test-driven approach to development can help a lot, especially as your team grows. Even for a solo developer, being forced to think about what changes you need to make before making those changes has a ton of benefit. In addition, having the tests in place before the implementation means you’re not spending time or brain power on testing or devising ways to test.

Think about all of the different informal tests you've set up while working on a feature. How many times have you called a function and var_dump()ed the result until you got the right result, then deleted that code and moved on? That's the same basic approach as writing a test that asserts the result of the function is what you expect. Don't you wish you could have kept those informal tests with your code base forever?

Testing jQuery with Jest in WordPress Development
Torque, 15 Jan 2019
https://torquemag.io/2019/01/testing-jquery-with-jest-in-wordpress-development/

I recently wrote a series of posts on testing React apps and a series on using phpunit for testing WordPress plugins. Those covered testing brand new code and writing it in a way that is testable. One of the tricky things about adopting tests in a legacy code base is that the code is often written in a way that makes testing harder.

In this article, I’m going to look at two ways that jQuery is hard to test. First I will show how to use mocks to artificially isolate your code from jQuery itself. Then I will give an example with code that runs on a click and code used for making AJAX requests.

Quick Introduction To Jest Mocks

One of the things that is tricky about unit testing is that not all code can be written to be perfectly isolated from global systems such as DOM events or WordPress hooks. Mocking libraries help you artificially remove systems that are not what the current test is covering. For example, 10up/WP_Mock can be used to replace the WordPress plugins API with testing mocks.

This article is about testing jQuery with Jest. Jest is developed by Facebook and is generally used to test React apps, but it can be used with other test libraries. One great feature of Jest is its mocking capabilities. The simplest use of Jest mocks is to count the number of times a function is called. If your test covers a function that calls another function, you just need to know that the other function is called.

Let’s look at a test for a function that takes an array of items and then applies the same callback to each of them. In this snippet I have updateItems — the function to test — and updatePosts() which uses that function to pass an array of posts to updatePost(). Later in this article, I look at how to test jQuery.ajax() calls. For now, I’m just worried about making sure my updateItems() dispatches the callback:

View the code on Gist.
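A sketch of those two functions; updatePost is only stubbed here because its jQuery.ajax() call is covered later in the article:

// Apply the same callback to every item.
function updateItems( items, callback ) {
	items.forEach( ( item ) => callback( item ) );
}

// Stand-in for the real function, which makes a jQuery.ajax() request.
function updatePost( post ) {}

function updatePosts( posts ) {
	updateItems( posts, updatePost );
}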

Now, let’s look at the test. I’m not even going to give it mock posts at this point, just an array with three items and then assert my callback was called three times:

View the code on Gist.

The key line here is line 4. On that line, I create a function called "callback" using jest.fn(). As a result, I can count the times it's called using callback.mock.calls.length.
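In sketch form, the test looks something like this:

describe( 'updateItems', () => {
	it( 'calls the callback once per item', () => {
		// A Jest mock function records every call made to it.
		const callback = jest.fn();
		updateItems( [ 1, 2, 3 ], callback );
		expect( callback.mock.calls.length ).toBe( 3 );
	} );
} );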

That tests that my function was called the right number of times. It does not show me that it got the right data. For that, we can use Jest's toHaveBeenCalledWith() matcher on the mock created with jest.fn():

View the code on Gist.
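For example:

it( 'passes each item to the callback', () => {
	const callback = jest.fn();
	const posts = [ { id: 1 }, { id: 2 } ];
	updateItems( posts, callback );
	expect( callback ).toHaveBeenCalledWith( posts[ 0 ] );
	expect( callback ).toHaveBeenCalledWith( posts[ 1 ] );
} );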

Separate Concerns First

One of the biggest obstacles to adopting testing in a legacy code base is that your code may not be easy to test. Isolated unit testing may be impossible. You can still write tests of the DOM with a browser automation framework such as Cypress.io or similar. You can also use something like dom-testing-library to test the DOM.

But simple refactors can isolate your business logic from the DOM event system. Here is an example where jQuery is used to bind to a click event and then add or remove classes based on a condition:

View the code on Gist.

You could render all or part of the DOM, simulate the click and then test if the DOM elements have the right classes. That's slow, and it's testing a lot of things that are the responsibility of jQuery, not your code.

Your business logic is your business, jQuery’s event binding and dispatching system is not. The snippet of code I showed above does many things, a violation of the single responsibility principle. Let’s break it up into two functions. One function takes jQuery as a dependency and then executes the business logic. The other function’s responsibility is to wire the isolated logic that is encapsulated in the first function to jQuery’s event system.

View the code on Gist.
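A sketch of that split; the selectors, class names, and condition stand in for whatever the original handler used:

// The business logic, with jQuery passed in as a dependency.
function toggleMessageClass( $, isHidden ) {
	if ( isHidden ) {
		$( '.message' ).removeClass( 'hidden' );
	} else {
		$( '.message' ).addClass( 'hidden' );
	}
}

// The wiring: bind the isolated logic to jQuery's event system.
function bindToggle( $ ) {
	$( '#toggle' ).on( 'click', () => {
		toggleMessageClass( $, $( '.message' ).hasClass( 'hidden' ) );
	} );
}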

Now let’s look at how to test this function with our business logic. Because we pass jQuery in as a dependency to the function, we could pass any function there. Such as a jest mock. Because we’re not going to be testing the effects of these functions — adding and removing classes from the right DOM elements — we just need to know that the functions are called.

The basic mock I showed before is not sufficient for this test. I say that for two reasons. First off, it doesn't have a constructor, so the jQuery constructor call, which we're not actually testing, would throw an error. Second, we need to be able to count the calls to separate methods.

Here is a test that solves the first problem but not the second:

View the code on Gist.

This will pass with a proper constructor. But all we know is that two methods of this object were called. Which ones? We don't know, and that matters, because in our test we need to make sure removeClass is called but addClass is not. That's the business logic we're testing.

The solution is to put those methods in their own variables that we can check:

View the code on Gist.
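Something along these lines, reusing the function sketched above:

it( 'removes the class and does not add one when the element is hidden', () => {
	// Each method gets its own mock so calls can be counted separately.
	const addClass = jest.fn();
	const removeClass = jest.fn();
	// A fake jQuery: a constructor that returns an object with the two methods.
	const $ = jest.fn( () => ( { addClass, removeClass } ) );

	toggleMessageClass( $, true );

	expect( removeClass ).toHaveBeenCalled();
	expect( addClass ).not.toHaveBeenCalled();
} );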

Now we’re testing that the business logic leads to the right function being called. We’re not testing the effects, just our logic. Mission achieved.

What About The Event Binding?

I totally didn’t cover the actual event bindings. I don’t care.

Why?

First, I really doubt that I will have an issue there. If there is one, it's a big problem that will be surfaced elsewhere: acceptance and integration tests that run against a real website and simulate user interactions will fail hard if jQuery is not working properly, and that gives me more confidence in my event bindings than any mock event I could create for tests. If the business logic is tested, I'm good.

Testing jQuery AJAX With Jest

Testing anything that involves an HTTP request can be tricky. Getting rid of side effects first is important. Breaking the business logic apart from the jQuery.ajax() API can allow for a similar testing strategy. Consider this jQuery AJAX usage with three callbacks:

View the code on Gist.

This is pretty common, I copied it out of something I wrote a while ago. One way to think about testing this code would be to leave it as is, but come up with a way to mock the API. That doesn’t make sense to me if the API is covered by its own tests and jQuery AJAX has its own tests. Instead, think of it as three functions:

View the code on Gist.

Here are three functions that we could use. Isolating them into functions means they can be reused, which is great. In addition, we can pass the two global dependencies — jQuery and Handlebars — into the functions. These types of functions are not pure by design. The term "pure" in this context means a function with no side effects. These functions modify the DOM using global-scoped APIs and that's fine if we can easily replace the global-scoped APIs with Jest mocks.
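In sketch form, with the selectors, template, and messages as stand-ins:

// Show a loading indicator before the request starts.
function showSpinner( $ ) {
	$( '.spinner' ).show();
}

// Render the response data with a Handlebars template.
function renderPosts( $, Handlebars, posts ) {
	const template = Handlebars.compile( $( '#posts-template' ).html() );
	$( '#posts' ).html( template( { posts } ) );
}

// Show an error message when the request fails.
function showError( $, message ) {
	$( '#posts' ).html( '<p class="error">' + message + '</p>' );
}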

Here are tests that just check that the right functions in the mock object are called. In one place — the argument for the error function — I am concerned that the right value is passed to that method, so I check with Jest's toBeCalledWith() matcher. The other functions I'm mocking, I trust they work, as long as they are called. Calling them in the right order is my concern, calling them with the right data is my concern, what they do is not my concern.

View the code on Gist.

What About The API Request?

For the most part, I don’t care for the same reasons I gave for the event bindings. API endpoints get their own isolated tests. Also, I have acceptance tests. If I was building an API client, then I would need to test it with mock responses. For that, I would use a mocking library for the AJAX requests.

I prefer Fetch to jQuery.ajax() for a few reasons — it’s built into the browser and works the same on the server and there is a really useful mocking library for it. I wrote a bit about how to write unit tests for the Fetch API here.

Mock On

In this article, I’ve covered a lot about Jest mocks. If you’re looking to learn more about Jest mocks, I recommend this post. My goal with testing JavaScript is to use as little additional layers on top of Jest as I need. If I can do a test with just Jest. That can go too far, refactoring code or writing too many mocks that then need changed to match the changes in the code, therefore removing the point of having the tests anyway.

That’s a balance that is hard to find and is very different than writing PHP tests, where isolated unit tests and integration tests covering how a few classes are wired together should be all that is needed to describe database APIs or a REST API or a class that handles business logic as long as none of them have concerns. UI testing — what we’re covering in an application that only uses JavaScript in the browser — is trickier. The UI is where all of the parts come together, so isolated unit tests can easily restrict instead of accelerating code velocity.

Sharing React Components With Gutenberg
Torque, 21 Nov 2018
https://torquemag.io/2018/11/sharing-react-components-with-gutenberg/

The new WordPress block-based editor Gutenberg is coming to WordPress soon. While no one has yet defined where Gutenberg will next be used, it's been well architected for reuse, which is great, because plugin developers can now use these components in other interfaces, in the WordPress admin and beyond.

The Gutenberg team is currently moving a lot of the code that is most likely to be reused into npm modules, which makes it very simple to reuse Gutenberg components in a React app, even one that isn't in WordPress. That's not the only way to import Gutenberg components or utilities into your project if it is in WordPress, as I will discuss in this post.

This post will show how to share React components between Gutenberg blocks, non-Gutenberg wp-admin screens powered by React, and React apps. This is based on work I am currently doing to make my plugin Caldera Forms Gutenberg friendly and rewrite a lot of the user interface in React.

Bringing Gutenberg Modules Into Scope

Gutenberg’s code base is broken up into various modules, for example, “data” for state management or “components” for the UI components the editor is constructed from. This pattern helps navigate the code base since each module is a top-level directory. It also helps tell us how to import a module or component. For example, if we want to use the SelectControl component, we access it via the components module — wp.components.SelectControl.

Let’s look at three options for accessing these modules. The first does not require webpack. The other two do require webpack or to be adapted to some other build system.

Using The wp Global

All of Gutenberg’s libraries are accessible through the global-scoped variable “wp”. That means that the simplest way to bring components into scope is accessing the wp global. This works just fine if assets are managed using wp_enqueue_script and you set the right dependency is set for your script.

In this example, WordPress's render function, createElement, is accessed via wp:

View the code on Gist.
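A sketch of what that access looks like:

// wp.element wraps React; createElement is Gutenberg's equivalent of React.createElement.
const { createElement } = wp.element;

const heading = createElement( 'h2', {}, 'Hello Gutenberg' );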

That’s it. Works in a block’s JS file or anywhere else in WordPress if you set “element” as a dependency when enquiring your script.

Using wp As A webpack External

If you look at Gutenberg’s source, and you should, it’s a great read, you will see webpack imports like this

View the code on Gist.

This is actually the same thing as accessing the wp.element global. Gutenberg sets up a webpack external for each of the entry points and packages. This is a good pattern that serves traditional WordPress and the more modern webpack well.

You can set up a similar webpack external to act as an alias in your plugin. I built a block plugin for alert messages as example code for my WordCamp talks. Its webpack config has externals for several Gutenberg packages set up.

View the code on Gist.
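A sketch of that kind of config; the handful of packages listed here is arbitrary:

// webpack.config.js (excerpt): map @wordpress/* imports onto the wp global.
module.exports = {
	externals: {
		'@wordpress/element': 'wp.element',
		'@wordpress/components': 'wp.components',
		'@wordpress/data': 'wp.data',
	},
};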

Then you can import the element module with the same syntax Gutenberg and the Gutenberg documentation use.

View the code on Gist.

Using npm

As I said earlier in this post, the modules of Gutenberg are or will be installable via the JavaScript dependency management system npm. If you are installing a WordPress package in a WordPress plugin, you probably should install it as a development dependency. That way you can use webpack imports and have the module work in your tests, but not add the module to your production build. In WordPress, the dependency is loaded using wp_enqueue_script(). If you are not developing for the WordPress environment, reverse that advice.

For example, to install WordPress’ element module in a WordPress plugin:

View the code on Gist.

Or to install for use outside of the WordPress environment:

View the code on Gist.

Then to bring the dependency into scope, import it with webpack:

View the code on Gist.

That’s the same as the last few examples. That’s the point really. That line of code works in any context — Gutenberg blocks, other wp-admin screens, apps deployed outside of the WordPress environment and with a little more care, tests.

Managing The wp Global

WordPress uses global state. That makes things complicated, but Gutenberg’s use of the wp global variable is the most manageable global state we’ve ever had as WordPress devs. Let’s look at some gotchas I’ve run into because of the unpredictability of global state and how I fixed these issues.

For Non-Gutenberg wp-admin Screens

In a WordPress plugin, using the WordPress Babel preset makes a lot of sense to me. It keeps the Babel config pretty simple:

View the code on Gist.

{
	"presets": [ "@wordpress/default" ]
}

One thing that this does is use WordPress’ element to compile JSX. That’s good, as long as the global variable wp.element is set. It is in Gutenberg screens. This can be an issue if your components import React.

I ran into this problem when using React components for a wp-admin screen that shares components with my Gutenberg block. The solution, at a time when @wordpress/element was not yet on npm, was to do what Gutenberg does: define wp.element using React.

View the code on Gist.

This isn’t a scalable solution, but it works. I will refactor this code to use @wordpress/element as I described above. But, this is likely going to be an issue for anyone maintaining React and WordPress code together.

In Tests

One great reason to use React is Jest. I love Jest. Jest is the easiest testing tool I’ve ever used. This isn’t a tutorial on Jest, but I do want to cover setting up tests for components shared between Gutenberg and other React apps.

Because we use Jest in Caldera Forms, we need to make sure that wp.element is defined. Currently we are using a shim, copied from Gutenberg, to prevent errors in our tests. Here it is:

View the code on Gist.

View the code on Gist.

global.wp = {
	shortcode: {

	},
	apiRequest: {

	}
};

Object.defineProperty( global.wp, 'element', {
	get: () => require( 'react' ),
} );

Looks familiar right? It’s the same thing WordPress does. In fact, there are modules on npm published recently to provide a simple, repeatable solution for this. Keep an eye on what is getting published to npm, in the @wordpress organization scope.

We do need to tell Jest to use that setup file. Here is a complete Jest setup to add to package.json:

View the code on Gist.
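A sketch of the relevant part of package.json; the setup file path is an assumption:

{
	"jest": {
		"setupFiles": [ "<rootDir>/tests/setup-globals.js" ]
	}
}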

React and WordPress

I’ve written quite a bit about choosing between React and VueJS. Before Gutenberg, I was on team VueJS. But, learning Gutenberg development required me to take a deep dive into React and re-evaluate my original, negative opinions of JSX — the templating language that is generally used for React components.

You do not need to know React to develop for Gutenberg, but it really helps. You also do not have to use React. I have used Vue for block UI, using Gutenberg to supply state to Vue components. It’s pretty cool actually, but it’s an unnecessary layer of complication that would require a lot of good reasons to keep both Vue and React in your webpack bundles and have to think about both frameworks.

Vue and React manage state very differently, so keeping the rules of both, and the different syntax of their templating languages, in your head does not scale mentally.

Redux(-like) State Management With WordPress Data

So, I like Vue a lot, but once WordPress made me reconsider JSX and see how it could be used really well in a WordPress plugin, I was sold on React. One pain point for me with Vue was state management. I felt that Vuex, the recommended state management solution, required too much boilerplate and was hard to integrate with components without effectively creating global state. I was probably doing it wrong, but Vuex just never clicked for me.

Redux, which is the standard — for now — for state management in React apps, makes a lot more sense to me than Vuex. That’s my #2 reason for moving to React. Because WordPress now uses an abstraction on top of React for Gutenberg, using both Vue and React does not seem practical. The more we integrate with Gutenberg, the more using React is the simplest solution. I also love how the pattern of using container and presentational components with React + Redux helps keep concerns separate and unit tests simple.

The Redux abstraction in Gutenberg, available on npm as @wordpress/data, makes Redux simpler. By registering a “store” with @wordpress/data, actions and reducers are linked, and there are utilities for subscribing to changes, selecting data, making API requests, and higher-order components for injecting state.
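
As a rough sketch of what registering a store involves; the store name, state shape and action names here are placeholders for illustration, not Caldera Forms’ actual store.

import { registerStore } from '@wordpress/data';

const DEFAULT_STATE = { processors: [] };

registerStore( 'example/processors', {
	// The reducer decides how each action changes state.
	reducer( state = DEFAULT_STATE, action ) {
		if ( 'SET_PROCESSORS' === action.type ) {
			return { ...state, processors: action.processors };
		}
		return state;
	},
	// Actions describe changes; the store links them to the reducer.
	actions: {
		setProcessors: ( processors ) => ( { type: 'SET_PROCESSORS', processors } ),
	},
	// Selectors are how components read data back out of the store.
	selectors: {
		getProcessors: ( state ) => state.processors,
	},
} );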

In my last post for Torque, I covered the basics of state management for Redux with WordPress.  The example code was taken from the Caldera Forms processors UI library which I am working on. This is an example of a use case that has to work in the post editor, in other wp-admin screens and outside of a WordPress environment, since Caldera Forms Pro is a Laravel and Node app.

For example, here is a presentational component that encapsulates the entire processor UI, without any state management.

View the code on Gist.

We can call this component “controlled”. It’s not aware of state; its state is totally controlled by some other system. This component gets “wrapped” in the withSelect higher order component, so it can access data from the store.

This component is a container for the components that make up the UI. Its responsibility is to compose the UI from child components that use the container’s props.

Note that I’m using the prop-types library to tell React what types of props the component must receive. I love doing this. Using prop-types provides strong typing for React components without having to learn and set up Flow or TypeScript. I do use Flow on some projects, but for the most part I find prop-types to be more than enough validation.

What I really like about prop-types is that if I fail to follow the rules, my Jest snapshot tests will not work, and the errors they raise when they fail will tell me which component is being passed the wrong props and where.

This presentational component gets “wrapped” in the withSelect higher order component (HOC), so it can get data from the store and provide that to the presentation component.

View the code on Gist.

The withSelect HOC lets us read data from state. We also need to send changes to the store. We do this by wrapping the component with the withDispatch HOC:

View the code on Gist.

Here is the complete wrapped component, that can select from state and dispatch changes in state:

View the code on Gist.
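
A sketch of that wiring, reusing the placeholder store from the sketch above rather than the author’s actual component:

import { withSelect, withDispatch } from '@wordpress/data';
import { compose } from '@wordpress/compose';
import Processors from './Processors';

export default compose( [
	// Read from the store and pass the result down as a prop.
	withSelect( ( select ) => ( {
		processors: select( 'example/processors' ).getProcessors(),
	} ) ),
	// Provide a prop that dispatches changes back to the store.
	withDispatch( ( dispatch ) => ( {
		onChangeProcessors: ( processors ) =>
			dispatch( 'example/processors' ).setProcessors( processors ),
	} ) ),
] )( Processors );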

There is no coupling between the presentational component and the store. The rules of the connection, which the wrapper component has the sole responsibility for, are defined based on the component props, which are just arguments passed to a class. The presentational component is passed the prop with the change handler function and it calls it. What that function does is the responsibility of a different component. As long as the public API of the component and the change handler function stay the same, any change can happen in the component or function with that responsibility. Unit tests with Jest detect these changes.

With Jest, we do not want to test React. We want to test that our component has the right props and if there are change handlers, they fire and emit the right data. Jest has a great snapshot testing tool. This tool renders the component and stores it as JSON. Then in future runs the snapshot is recreated and compared to the stored snapshot. Any changes and the test fails.

When these types of tests fail, that indicates that different props were passed to the component or one of its children. This could be intentional, and that’s fine; you can just save a new snapshot. Or it means that the way your components are wired together has changed in an unintended way and a regression bug has been introduced.

That’s It

There is nothing fancy here. Set up webpack imports and use them, installing dependencies as needed. If you’ve learned to use npm and use it for React development, you know how to do this. It’s exciting to see WordPress start to work more like a modern web app in this way. Much DUX.

Once you have everything set up to import with webpack and have the wp global managed, it does not really matter which environment you are in: Gutenberg, another WordPress screen, or not WordPress at all. That’s really cool.

The post Sharing React Components With Gutenberg appeared first on Torque.

Testing React Components With Enzyme https://torquemag.io/2018/11/testing-react-components-with-enzyme/ Wed, 14 Nov 2018 16:31:46 +0000 https://torquemag.io/?p=85549 So far in my series of posts on React development for WordPress, I’ve covered React basics, create-react-app and testing with Jest. Jest is useful for testing the rendering of React components. There are a few big buckets of functionality we have not looked at testing yet. Specifically how the internal state of stateful component changes, DOM events, and isolated testing of component class methods. Jest alone can’t do that. Using Enzyme For DOM Testing I hope it’s clear now how Jest, with the default test renderer, can do a lot of test coverage quickly. A snapshot covers one component, with […]

The post Testing React Components With Enzyme appeared first on Torque.

So far in my series of posts on React development for WordPress, I’ve covered React basics, create-react-app and testing with Jest. Jest is useful for testing the rendering of React components.

There are a few big buckets of functionality we have not looked at testing yet. Specifically: how the internal state of a stateful component changes, DOM events, and isolated testing of component class methods. Jest alone can’t do that.

Using Enzyme For DOM Testing

I hope it’s clear now how Jest, with the default test renderer, can do a lot of test coverage quickly. A snapshot covers one component, with all of its props. One limitation I mentioned was that it doesn’t render the components to any type of DOM, so there is no way to trigger an event. Luckily Enzyme, which can be run by Jest, lets us do exactly that — simulate DOM events. It also has cool features to check the props and state of a component, or to call methods of a class directly.

Enzyme is not installed in create-react-app by default. To install Enzyme and the React adapter for Enzyme:

View the code on Gist.

Now, to set up Enzyme in a test file, we import one or both of its renderers — mount and shallow — and the React adapter. The React adapter must be added to the Enzyme instance for this scope. I don’t use any of its options. My test file headers for Enzyme tests look like this:

View the code on Gist.
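
A sketch of such a header; the adapter package here assumes React 16, so adjust for your React version.

import React from 'react';
import Enzyme, { shallow, mount } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

// Register the React adapter with Enzyme for this test file's scope.
Enzyme.configure( { adapter: new Adapter() } );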

Shallow vs Deep Rendering React Components For Enzyme Testing

Enzyme has two renderers — mount and shallow. Shallow will shallow-render the component, which means it will not render any of the component’s children. That makes it faster than mount, which performs a deep render and therefore renders a component and all of its children.

If the limitation of shallow is not a problem for a test, then use shallow; it’s faster. If you need to test the children of a component, use mount, since only a deep render will include them. For now, we will use shallow, as we’re just testing one component. I’ll cover a few cases where mount is the better choice later.

Simulating Change Events With Enzyme

Rendering a component with Enzyme’s shallow is similar to what we did before with Jest’s React test renderer. But the object that is returned is a lot more useful. We can use Enzyme’s find() method, with jQuery-like selectors, to find elements in the rendered component. We can assert values based on what it finds or simulate events.

For our post editor, let’s find the input, change it and see if our change handler function behaved properly. Here’s the test, I’ll walk through it below:

View the code on Gist.

At the top, I’m importing my component and the test tools. I created a test suite for all events of this component and one test. The mock post is the same as I used for snapshot tests. This actually goes in the same file. I’m cutting those tests out for clarity here.

I introduced the need for Enzyme by noting my snapshot tests used a change handler that did nothing. Now that those tests prove this component has its props set up properly for that change handler function prop, let’s add a test to prove it works. The benefit of dependency injection shows here — we can test the update in isolation, in a component, and when it’s rendered inside of another component. Each test adds more layers of coverage, without creating strong coupling.

This time, let’s create a change handler that updates a variable in the test’s scope. Then we can ensure it received the right value. Earlier I said that I think a component like this should pass the whole updated post, not just the title, when it changes.

So that is part of what we will test. I’m going to test the value of the updated object’s title.rendered property. My change handler captures that.

Let’s zoom into the change event though. React passes a synthetic DOM event to the callback handler. This is a great example of polymorphism in object-oriented programming. In React, we should never touch the DOM, we touch the virtual DOM abstraction. Therefore we have to interact with an abstraction of a JavaScript event that works the same way as a “real” JavaScript event. I use “real” in quotes because it is actually JavaScript’s abstraction over the web API that React is extending. Everything is an abstraction, nothing is real, we live in a simulation. As a result, we let React deal with cross-browser issues.

This also makes events easy to simulate:

View the code on Gist.
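
For example, a simulated change only needs to provide the part of the event the handler reads; the selector and value here are placeholders.

// Only the part of the event the handler uses, event.target.value, needs to be provided.
wrapper.find( 'input' ).simulate( 'change', { target: { value: 'New Title' } } );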

We do not have to mock the whole event, just event.target.value. Manual mocking like this is limited and you may wish to use Sinon to make mocking more manageable.

Testing Loops With Enzyme

Earlier in this series, I created a component to list posts. Each post is wrapped in an element with a specific class. One way to test this is to find all of the elements with that class and count the length of the results.

Note, I’m using the word “class” as in “CSS class” not as in “extends the React.Component class.”

View the code on Gist.
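
A sketch of that kind of test; the class name, the component import and the mock posts are placeholders.

import React from 'react';
import { mount } from 'enzyme';
import Posts from './Posts';

const mockPosts = [
	{ id: 1, title: { rendered: 'One' }, content: { rendered: '<p>One</p>' } },
	{ id: 2, title: { rendered: 'Two' }, content: { rendered: '<p>Two</p>' } },
];

describe( 'Posts', () => {
	it( 'renders one element with the post class for each post', () => {
		const wrapper = mount( <Posts posts={ mockPosts } /> );
		// Count the rendered children by their CSS class.
		expect( wrapper.find( '.post' ) ).toHaveLength( mockPosts.length );
	} );
} );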

In this test, I’m using Enzyme’s find method to search for elements with a class. Again, the syntax is jQuery-like. Also, note that I used mount instead of shallow. Why? Shallow would not render the child components, which is what I am testing — how this component renders its children.

Class Components Own State

So far, we’ve looked at components that are unaware of their state. They take in props and communicate changes via functions passed in as props. But state does have to live somewhere. That’s what class components are for — they handle state and are aware of React lifecycle events.

Let’s put our Posts component in a container component, and use that component to manage state. Keep in mind, this is what react-redux does. Let’s understand the concept before offloading the concern to Facebook, which in general should be our strategy for managing the fact that there is more work to do than there are work hours in a day. It also feels like a good payback for all of the time that the Facebook stole from me before I uninstalled that addictive behavior from my phone.

What I would actually do is leave my main container dumb and wrap it in a higher-order component from Redux or WordPress that injects state, but that’s a different Torque post and also a video from the JavaScript for WP conference you can watch on YouTube.

Still, stateful components are useful when used sparingly. Let’s turn the component that create-react-app generates as “App” into a container for the post list. Because we started from the smallest part — the post — then worked up from there — the post list and now the app the post list sits in — this should be pretty simple. All of the small details are already covered. We’re not starting with a big array of posts and designing multiple components at once. Instead, we’re waiting until we have all of the building blocks built before we assemble them.

Let’s start by adding state to the class constructor, with posts as an empty array.

View the code on Gist.

My Posts component expects an array that isn’t empty. What to do when there are no posts is not its concern. So in the render method, I want to use a JSX conditional to only show the posts when that array is not empty:

View the code on Gist.

Here is my whole component:

View the code on Gist.
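
The gist has the real component; as a rough outline, that kind of container looks like this, with the import path and class name assumed.

import React, { Component } from 'react';
import Posts from './Posts/Posts';

class App extends Component {
	constructor( props ) {
		super( props );
		// Start with no posts; they get added to state later.
		this.state = { posts: [] };
	}

	render() {
		return (
			<div className="App">
				{ /* Only render the list once there are posts in state. */ }
				{ this.state.posts.length > 0 && (
					<Posts posts={ this.state.posts } />
				) }
			</div>
		);
	}
}

export default App;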

Testing State Of React Components With Enzyme

Before we talk about how to get data into state, let’s make sure that once we do, it will work correctly. One thing at a time. One good thing about Enzyme is it can directly mutate the state of a component and also read it.

Practically speaking, that means we can test if this component is going to work when posts are added to state, without worrying about how to add posts to state. That’s a separate complication we will get to once we’re ready.

What we need to test first is two things: does our loop show posts when state.posts is not empty, and does it show nothing, without generating errors, when state.posts is empty. Enzyme provides a setState method that allows us to call the class’ setState method in our tests. This class is the top level of our program’s state management, so this functionality should be black-boxed here.

Let’s look at a test that adds posts to state and checks that they are rendered correctly:

View the code on Gist.
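
Sketched out, reusing the mock posts from the earlier test sketch plus an import of the App component, it might read like this.

it( 'renders posts once they are added to state', () => {
	const wrapper = mount( <App /> );
	// Nothing in state yet, so nothing should be rendered.
	expect( wrapper.find( '.post' ) ).toHaveLength( 0 );
	// Put posts into state directly; where they come from is not this test's concern.
	wrapper.setState( { posts: mockPosts } );
	wrapper.update();
	expect( wrapper.find( '.post' ) ).toHaveLength( mockPosts.length );
} );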

This is very similar to the test I showed earlier for the Posts component. We’re just making sure it works properly in this context.

Calling React Class Methods With Enzyme

Let’s say we wanted to add the ability to edit one post with this app. We already have a PostEdit component. But we need to supply it with the right post. Let’s add a property to state to track the ID of the post currently being edited. I do not want to copy that post from state.posts, just its ID.

Finding the post in state is a separate concern, that gets its own method. Let’s look at the constructor and the new method:

View the code on Gist.
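
A rough sketch of the shape of that constructor and method; the state property and method names are mine, not necessarily the author’s, and the Component import is assumed.

class App extends Component {
	constructor( props ) {
		super( props );
		this.state = {
			posts: [],
			// Track only the ID of the post being edited, not a copy of the post.
			currentPostId: 0,
		};
		// Bind so this.state and this.setState work inside the method.
		this.getCurrentPost = this.getCurrentPost.bind( this );
	}

	getCurrentPost() {
		// Finding the post in state is this method's single responsibility.
		return this.state.posts.find(
			( post ) => post.id === this.state.currentPostId
		);
	}

	// render() omitted here; its use of this method is shown below.
}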

Notice how in the constructor, I used the function bind to explicitly bind the constructor’s this to the method’s this. If I didn’t do that, this.state, this.setState and this.props would be undefined. This is an extra step you must take for every single method in a class that uses props or state.

Then I can use this and my PostEdit component in render:

View the code on Gist.

To test this functionality — that the right post is found and the editor shows when the post is found — I will add a few tests. Each one builds on the last. Our previous test covered setting posts in state, so I can safely do the same thing, then test the next step:

View the code on Gist.

Once I trusted that worked — believe me, the test did not pass with my first version of this method — I can move on to testing that the editor shows when state dictates it should.

View the code on Gist.

This test proves we get an editor when we should. It doesn’t prove that the editor does not show when it should not. So this test, by itself, could be a false positive. We need a test for the other possibility to prove it is not a false positive:

View the code on Gist.

Testing React Change Handler’s Effect On State

Eventually, if I were to drag this series out for another 3-4 posts, I’d add a component to control which post is being edited and wire it into this App component. That will require a change handler for the App component’s state. Let’s add that and test it, so we can see an example of how to test state after a component’s change handler is invoked.

Testing the change handler in isolation, before implementing the component to control the value means that the control is designed around the needs of the interface it sits in, not the other way around. Also, this control is swappable, as long as it works with the change handler, we don’t care what it is or what it changes to later.

Here is the simple change handler function we can pass down to the control:

View the code on Gist.
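
Something along these lines, continuing the placeholder names from the earlier sketch; it would be bound in the constructor like the other methods.

// Inside the App class: record which post is being edited.
changeCurrentPost( postId ) {
	this.setState( { currentPostId: postId } );
}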

Here is the test, which calls that method directly and then checks state using Enzyme’s state method:

View the code on Gist.
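
A sketch of that test, using the placeholder method name from above:

it( 'updates the current post ID in state via the change handler', () => {
	const wrapper = mount( <App /> );
	// Call the class method directly on the mounted instance.
	wrapper.instance().changeCurrentPost( 42 );
	// Read component state back out with Enzyme's state() method.
	expect( wrapper.state( 'currentPostId' ) ).toEqual( 42 );
} );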

What About Loading Posts?

At some point, you need to actually add the posts. Again, I’d leave that to state management in anything complex. But, if you do want to encapsulate everything in this app, that’s why we need a class — to take advantage of React lifecycle events.

React lifecycle events are like WordPress actions — an opportunity to run code at a specific place in the program’s execution. The earliest a component can safely make an AJAX request for data and update state is componentDidMount. That’s the event that runs when the component is mounted to the DOM. Before that event, we could not update state.

View the code on Gist.
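
A sketch of that lifecycle method using the browser fetch API against the core posts route; the author’s actual request code may differ.

componentDidMount() {
	// Once the component is mounted it is safe to request data and call setState.
	fetch( '/wp-json/wp/v2/posts' )
		.then( ( response ) => response.json() )
		.then( ( posts ) => this.setState( { posts } ) );
}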

Again, I think that API requests should be handled by Redux, or another state management system. This approach strongly couples API interactions to the UI components, making them less reusable, and it means the API client cannot be reused in tests or without React.

Test-Driven React Development

I hope in this series you’ve taken away two things. The first is that by using test-driven development, we can make something simple, ensure it works, and then slowly add complexity. For me, this means developing one small unit of functionality at a time. That’s easier to think about, and easier to work on. I like that.

The other big takeaway here is not to over-use React class components. Using stateless components as much as possible reduces complication, increases reusability, and functional components are easier to test than class components.

Most importantly, I hope you’ve seen the value of separating concerns and how React helps you follow this practice, so that the benefits of this software design ideal, and of React, are more clear. In the next post I’ll cover sharing React components between Gutenberg and other React apps.

The post Testing React Components With Enzyme appeared first on Torque.

Small is the New Big: An Interview with Paul Jarvis https://torquemag.io/2018/11/small-is-the-new-big-an-interview-with-paul-jarvis/ https://torquemag.io/2018/11/small-is-the-new-big-an-interview-with-paul-jarvis/#comments Thu, 08 Nov 2018 16:20:04 +0000 https://torquemag.io/?p=85510 If our server can’t keep up with the number of requests its getting, we have two basic options; scale up, add more servers or server resources, or make the program more efficient. A perfectly-optimized program would never run out of server resources. That’s a great ideal, but its hard and processing power is cheap, so we add a few more cores. Scaling up is our default. Like a lot of developers, I spend a lot of time working on these types of scaling problems. As I’ve grown my business, I’ve learned the hard way that I have to give up […]

The post Small is the New Big: An Interview with Paul Jarvis appeared first on Torque.

If our server can’t keep up with the number of requests it’s getting, we have two basic options: scale up, by adding more servers or server resources, or make the program more efficient. A perfectly-optimized program would never run out of server resources. That’s a great ideal, but it’s hard, and processing power is cheap, so we add a few more cores. Scaling up is our default.

Like a lot of developers, I spend a lot of time working on these types of scaling problems.

As I’ve grown my business, I’ve learned the hard way that I have to give up my attachment to taking on an unsustainable amount of work. Like a lot of entrepreneurs, I spend a lot of time learning about how to optimize my personal productivity. I think this will help me do more each day, and often that’s true. Like a lot of developers, I spend a lot of time on development automation. I think this will help me write better code, faster. Sometimes that’s true.

There is a limit to these types of optimizations. At some point, you have to add people. If you’ve already optimized and documented the process, then each person you add could make you more efficient. Not twice as efficient, but more efficient.

Scaling up – growing the team and spending more money – is too often presented as the only option.

I probably signed up for Paul Jarvis’ weekly newsletter to learn about email, in hopes of picking up some trick to increase my revenue so I could scale up my business. I became an avid reader and especially connected with this quote from his bio: “business growth isn’t always good, and isn’t always required.” That’s challenging to me, but here is someone who does a lot — an analytics platform, courses, and now a book — and seems to be doing it by himself, enjoying life as he does it. That’s attractive to me.

Paul’s new book “Company of One” argues that small is the new big. I’m pretty curious how he can keep things so small, and have a body of work and products that seem so big. He was nice enough to answer some of my questions:

Torque: You do a lot of things. You’re now a published author and you have the weekly mailing list and Fathom Analytics with Danny van Kooten and the MailChimp course. Can you share a productivity tip or two that you use to keep that all going as a solo entrepreneur?

Jarvis: Yes, I have 3 software products, 3 online courses, 2 podcasts, a weekly newsletter which I write a full-length article for, and I write books.

So, this might sound ridiculous, but the best way to be productive is to do one thing at a time. The only way I can get so much done in the 4–6 hours a day I work is by laser focusing on each task and blocking everything else out.

What it looks like is essentially single-tasking.

First, I haven’t had any notifications on any devices for about 5 years now (and life or business hasn’t exploded). No Twitter blips, dings, red circles or top/right dialog boxes on my Mac. No warning if there’s a new email in my inbox. No announcements from any project management or group chat tool. Nothing. The only thing I let interrupt my work is calendar notifications (to remind me of things like interviews and calls) and text messages (no one texts me unless it’s important).

By doing this I can focus completely on the task at hand. So if it’s writing, that’s the only app open on my computer. If it’s design, then that’s the only app open. And sometimes, it’s Twitter or email, and either of those are the only things open. By doing this, I can get through things quickly—because batching similar tasks gets my brain into the flow of that task.

That said, I like variance in my work, so I really like having multiple projects on the go. It keeps things interesting. But, each project only takes up a lot of my time for a short spell. So I may spend a week on Fathom if we have a big feature push, then an hour a week on it for the next 2 months. Or, I may be writing Company of One for 3 months, then not write another book for a few years. There’s a balance I’ve found where I get to do different (and interesting things) that keep my brain engaged, without having to work on them each, at all times.

With every project, I consider not only the time costs to create it, but the maintenance cost to keep them going. So most projects, like podcasts (which are seasonal for me) or courses (each opens for a week in the spring and a week in the fall) or books (one every few years), require a sprint of focused time, then no work for ages.

Is there anything you miss about your work from before you took the minimalist approach to your business?

It’s always been fairly minimal. Even in the beginning (in the 1990s), I was very much about simple designs and simple solutions for the clients I had.

Working for yourself is freedom—if you do it right—so achieving greater freedom in your business by implementing ideas borrowed from minimalism seems like a win-win. (Or maybe it’s just one win since the second win isn’t necessary and therefore purged. Hashtag, minimalismjokes.)

One of the smartest things I’ve done in my business is to question whether “more” is actually better, which is the complete opposite of the approach taken by startups and corporations.

Such businesses tend to see growth as the chief indicator of success. More customers is a win! Higher revenue is a win! Greater exposure is a win! And sure, they can be, but not always. And definitely not always when blindly obtained.

Sometimes more customers mean much more customer support. Sometimes more revenue comes at the price of higher investments and expenses (netting less profit in spite of more revenue). Sometimes more exposure means more of the wrong people see you and more of the right people for your business are put off because they think your business is actually for someone else.

Excess ≠ Success (Hi math, I love you!)

Sometimes “enough” is better. For instance, if I make enough money to support my life and save a little, “more” likely only brings more stress, more work, more responsibility. If I already have enough customers that I can personally support, why would I want more if that would mean I had to hire and then manage employees? Remember my note about freedom? Enough means I can optimize for freedom, not blind growth.

We tend to think about growth as a good thing, but also a problem that has to be solved by scaling up. Your book argues that’s not a given. What questions would you recommend to a freelance site builder ask themselves that will help them decide if hiring employees is the right answer for their lifestyle and business or not?

I’m glad you asked because there are definitely some questions to ask yourself. And here’s the thing: regardless of what thought leaders online might tell you, success is so deeply personal. Meaning, it looks like different things for different people.

The point of Company of One is not to become anti-growth, but to simply question it. A company of one questions growth first, and then resists it if there’s a better, smarter way forward.

Before we get into the questions, I just wanted to share a few bits of research from studies about growth, because it’s not always beneficial for business and sometimes it’s downright harmful.

In 2012, the Startup Genome Project conducted a study where they analyzed more than 3,200 startups and found that 74 percent of those businesses failed – not because of competition or bad business plans, but because they scaled up too quickly. Growth, as a primary focus, is not only a bad business strategy but an entirely harmful one. By failing—as defined in the study—these startups had massive layoffs, closed shop completely, or sold off their business for pennies on the dollar. Putting growth over profit was their downfall.

When the Kauffman Foundation and Inc. Magazine did a follow-up study on a list of the 5,000 fastest-growing companies in America five to eight years later, they found that more than two-thirds of them were out of business, had undergone massive layoffs, or had been sold below their market value, confirming the findings of the Startup Genome Project. These companies weren’t able to become self-sustaining because they spent and grew based on where they thought their revenue would hit—or they grew based on venture capital injections of funds, not on actual revenue.

As for what we should be asking ourselves if we want to truly question growth, I’d start with these:

  • Why do you want more growth? Answer this question three times, because the first answer or two could be just a story you’re telling yourself.
  • How much is enough? How will you know when you’ve reached enough? What will change when you reach enough?
  • Does this growth just serve your ego or is it beneficial in some way? If yes, in what way specifically?
  • How does bigger/more/growth serve or help your existing customers?
  • What are the maintenance costs of saying “yes” or starting/building X?
  • How does this affect your profit (not just your revenue)?
  • How does it affect your happiness?
  • How does it affect your responsibilities and how you wanna spend your day? Because growth can mean growing out of a job you actually love to do.

A tagline I saw for your new book that I loved was “small is the new big.” I’m wondering how that applies to products that decide to stay small. How can they feel big enough to be worth paying for, without being over-stuffed with features?

My favorite software does just one thing. My favorite writing app is IAWriter which doesn’t even let you change the font, sizing or colors. Overcast is my favorite podcast player because it isn’t stuffed with features I don’t care about.

By focusing on a single way to solve an issue for a specific type of customer, software can get really, really, really good at solving it, since that’s the main focus.

With Fathom, we’ve seen a ton of initial success because we only show a handful of stats to people, instead of Google Analytics’ 100 pages of reports that each have 100 variations. Our software is simple, minimal, and does just what it needs to do. Some people like that enough to pay for it. That said, it’s not everyone, and that’s a good thing.

Trying to make software that caters to everyone and solves all their problems will leave you with really awful software that’s bloated, slow and hard to use. It’s why products like MS Word or even Photoshop are rapidly losing market share to more minimal products like IAWriter and Figma. Heck, even WP Engine doesn’t try to offer hosting to every type of business and server setup imaginable; they focus on WordPress and businesses that have money to spend on a reliable and great solution.

Bigger isn’t better, better is better. When we confuse this we end up with awful products. That’s why the tagline for the book calls “small the new big” – because we’re finally waking up to the idea that huge companies with bloated software aren’t the only successful way forward. And I’m pretty excited about that.

The post Small is the New Big: An Interview with Paul Jarvis appeared first on Torque.

Testing Nested Components In A React App https://torquemag.io/2018/11/testing-nested-components-in-a-react-app/ https://torquemag.io/2018/11/testing-nested-components-in-a-react-app/#comments Wed, 07 Nov 2018 20:32:19 +0000 https://torquemag.io/?p=85507 This post is part of a series on React development for WordPress developers. In my last post, I covered unit testing React components using Jest. Jest, when used for basic assertions and snapshot tests can cover a lot of the functionality of a React component. The “React Way” of developing interfaces calls for composing interfaces, which are themselves components, out of smaller components. Jest lets us test each component in isolation, but we’ll also need to make sure that component work as intended when nested inside of each other. This article also covers looping through React components — for example, […]

The post Testing Nested Components In A React App appeared first on Torque.

This post is part of a series on React development for WordPress developers. In my last post, I covered unit testing React components using Jest. Jest, when used for basic assertions and snapshot tests can cover a lot of the functionality of a React component.

The “React Way” of developing interfaces calls for composing interfaces, which are themselves components, out of smaller components. Jest lets us test each component in isolation, but we’ll also need to make sure that component work as intended when nested inside of each other.

This article also covers looping through React components — for example, a Posts component that renders an array of posts using a Post component for each one — using array iterators. In order to speed up development even further, I’ll cover how I use a command line utility to intelligently copy existing components to new components.

Testing Nested Components In A React App

As I wrote earlier, passing props down to multiple components is where React apps get tricky. If you go down too many layers, you end up with the “prop drilling” problem.

Prop drilling (also called “threading”) refers to the process you have to go through to get data to parts of the React Component tree.

Kent C. Dodds

I should mention that the new React context API is an alternative approach to this problem. The context API is powerful, but I’d learn this way first, then watch Wes Bos’ video on the context API and think about which of your problems may be better solved with the context API than with “prop drilling.” But for a few layers, this strategy is simple, and prop-types and Jest can catch the problems it introduces.

Let’s create a component that loops through an array of posts and then use prop types to safely wire it to the existing Post component.

Using Generact To Copy A Component

At this point, we need a component that is almost the same as our existing one. One way to reduce the amount of repetitive typing we have to do is cut and paste the existing component and then do strategic find and replace. That’s boring and error-prone. Instead, let’s use Generact.

Generact is a module that does basically that, but is built with React components in mind. It’s really neat and saves a ton of time. Generact should be installed globally, which means you only have to do it once, per computer:

View the code on Gist.

Now, we can run the “generact” command in our app to copy any component:


In this screenshot of my IDE, you can see that I switched to the directory with Post, told Generact to copy that component and to put it one directory above. That creates the Posts directory, containing the component Posts and its tests. Neato.

One important note. The fact that it copied the tests is very useful. The fact that it copied the snapshots is not and will cause errors as Jest gets confused about file paths. I generally delete the snapshots folder and then let Jest recreate it. Those snapshots are invalid anyway.

Looping In A React Component

Now we have a start for our component, but it’s the same as the first one. This one should loop posts and pass them to Post. The first thing to do is update the propType for post to be “posts” and contain an array.

While I could use PropTypes.array to specify an array, that’s not what I really want. I do not want any array. I want an array containing objects with the shape of a WordPress post. So I used PropTypes.arrayOf. I could use this to specify, for example, an array containing only strings. In this case, I used the post shape I already had:

View the code on Gist.
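
Roughly, that prop type reads like this; the exact fields in the shape are an assumption.

import PropTypes from 'prop-types';

Posts.propTypes = {
	// An array where every item must match the shape of a WordPress post.
	posts: PropTypes.arrayOf(
		PropTypes.shape( {
			title: PropTypes.shape( { rendered: PropTypes.string } ),
			content: PropTypes.shape( { rendered: PropTypes.string } ),
		} )
	).isRequired,
};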

This shows the PropType. Once I have my tests working, and not before, I will add the loop. Jest lets me work iteratively. First make the component work, then add features. I can iterate safely, knowing what the effects of my changes are, because I have the tests first.

I then modified the tests from the Post component to cover this component. In addition to changing from Post to Posts, I changed from a mock post object to a mock array of post objects:

View the code on Gist.

Now that my component works, let’s add the loop. One major improvement from ES5 to ES6 and beyond is improved iterators for arrays. Array.forEach() and Array.map() make it easy to iterate over an array, like PHP’s foreach control structure or jQuery’s each method.

The difference between map and forEach is that forEach doesn’t return a value for each iteration. It’s useful for validating or mutating all or some of the items in a collection. On the other hand, map does return a value for each iteration. Therefore forEach is faster than map(), but map() can be used when we need to, say, return a React component for each item in an array.

That’s exactly what we need — iterate through each post and return a rendered Post component. Here is the loop:

View the code on Gist.
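
The loop itself, sketched out with assumed prop names, sits inside the component’s returned markup and looks roughly like this.

{ posts.map( ( post ) => (
	// The key prop tells React which array item this element represents.
	<Post key={ post.id } post={ post } />
) ) }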

I zoomed in here to look at the loop as the “key” prop is a necessary, but not obvious step. Key is a special prop. React is designed to only update the DOM when needed. That gets tricky with loops.

How does React know if item 3 in the array changed? You may think that React analyzes the array for all of its deeply nested objects and compares them to the last time it saw that array. If you think this, you will be annoyed, like I was when I started learning React and changes in state didn’t re-render. Why? React is NOT comparing the deeply nested properties of an array. We use the “key” prop to signify to React that this is a unique item in the array. Using the post ID in this context is great, since it is a unique identifier.

Here is the whole component:

View the code on Gist.

The tests I already had failed because the snapshot changed when I added the loop. Again, I inspected the change, decided it was what I wanted and then accepted the new snapshot.

Reusing Prop Types

I now have two components with similar prop types. I have identical code doing very similar things in two places. That smells bad. I knew it was a problem when I did it, but I didn’t want to address it right away. Too many changes at once means test failures are not meaningful. But this is a problem I want to fix.

Because I have tests in place, when I make the change, I will know that neither component has broken.

To accomplish this, I copied the shape to its own file:

View the code on Gist.
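
A sketch of that shared shape in its own file; the file name and fields are assumptions.

// post-shape.js
import PropTypes from 'prop-types';

// One place to describe what a WordPress post looks like to these components.
export const postShape = PropTypes.shape( {
	id: PropTypes.number.isRequired,
	title: PropTypes.shape( { rendered: PropTypes.string } ),
	content: PropTypes.shape( { rendered: PropTypes.string } ),
} );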

I can then use this as-is for my post prop in the Post component:

View the code on Gist.

I can also use it inside of PropTypes.arrayOf() for the posts prop in the Posts component:

View the code on Gist.

If you’re paying close attention, I added an ID prop to the postShape prop. My description of the post was not complete and is still not. Now, as I expand on it, I need to make a change in one place. By assigning the single responsibility of managing post shape to this constant, I have a maintainable way to change that shape.

What About TypeScript or Flow?

In my last post in this series, I covered setting up prop-types for prop type checking in React. I mentioned why I often think Flow or TypeScript is not necessary because of this tool. I do think it’s worth noting that the last step I showed — creating a repeatable shape for a post — is basically re-creating TypeScript. TypeScript lets you define the shape — like an interface in OOP PHP — for objects.

I’m currently developing npm modules to share React components between the web app and the plugin. For the API client, I did use Flow. Since that module is not using React, and all of my HTTP tests use mock data, type checking was really important to me.

Since Flow and TypeScript are compiled to JavaScript, the type checking happens at compile time. That’s good for more rigid projects like API clients or CRUD. For UI, I like the simplicity of prop-types. Also, it’s one less thing for new developers to learn.

Testing Events In React Components

So far, we’ve only looked at components that render content, but not update it. What if we want a form? Let’s look at how that works. First, let’s talk about responsibilities. We want to keep this component “dumb” — the logic of updating state is not its concern. It will be passed props and a function to communicate a change in a value to its parent. How its parent works must remain irrelevant to the component.

Keeping your change handlers decoupled is super important for reuse. If your component could get used in a small app using one component to manage state, with Redux or inside of a Gutenberg block, you need to keep the component decoupled from those three systems. Again, the principle of dependency injection applies. The change handler is a dependency from another part of the program, so we pass it into the component.

To begin, I used Generact, the same way as last time, to create a copy of the Post component and called it PostEditor. I then added one new prop-type – onChange.

View the code on Gist.

I used the prop-type for a function and made it required. My snapshot tests immediately created errors as they did not have this prop. To fix this, I updated the tests so the onChange prop was supplied a function that does literally nothing.

View the code on Gist.

These unit tests now prove that the component renders properly when provided the correct props. That’s good, but I would not trust that this means the component can edit a post. We need to test it by simulating the change event on the inputs of the editor.

Form Inputs And Change Handlers For Functional React Components

In my opinion, a component is responsible for shaping the value it passes to its parent via the change handler function. I do not want to have to create a change handler for the title and the content and the author and the taxonomies of the post. That’s too much. I want to pass in the post and a function to call when the post updates.

This means that the component is responsible for taking the event object, extracting its value, and merging that with the original post. That’s a few concerns. This is not where I switch to a class. Yes, I need a few closures, I can do that here.

First, here is our change handler function — internal to the component — that is responsible for merging the update and the existing values:

View the code on Gist.
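
A sketch of such a handler inside the functional component, assuming post and onChange props and a field/value signature:

// Merge one changed field into a copy of the post, then hand the whole
// updated post to the onChange prop passed in by the parent.
const handleChange = ( field, value ) => {
	onChange( {
		...post,
		[ field ]: value,
	} );
};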

The PostEdit component started as a copy of Post. Let’s update it to have an input for the post title. The same rules of HTML inputs apply here, including passing a function, to be called on the input’s change event, to the onChange attribute.

Here’s the input:

View the code on Gist.
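
Assuming the handleChange sketch above and a post prop with title.rendered, the input might be wired up like this inside the returned JSX.

{ /* The closure pulls the value off the event and hands it to handleChange. */ }
<input
	type="text"
	value={ post.title.rendered }
	onChange={ ( event ) => handleChange( 'title', { rendered: event.target.value } ) }
/>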

Here I use a closure to take the event, get its value and create the input for my internal change handler. Now here is the whole component:

View the code on Gist.

This approach is pretty simple. It totally isolates the concern of updating the post object inside of this component and then passes all of that out via the change handler. That process is black-boxed, but still testable, I don’t have to deal with binding state in a class component, and it’s still simple to read.

Because a JavaScript class is syntactic sugar and not browser-friendly, this is very similar to what Babel would compile given a class extending React.Component. I do not have any way to use React lifecycle events. That’s not a problem in the slightest; I have not yet needed that extra complexity because these are simple “dumb” components that are unaware of where their data comes from.

You could add more inputs to this component, following the same pattern. That’s the idea, you should try it. Show us what you created with a pull request to the example plugin.

The post Testing Nested Components In A React App appeared first on Torque.

Getting Started With React Unit Testing For WordPress Development https://torquemag.io/2018/10/getting-started-with-react-unit-testing-for-wordpress-development/ https://torquemag.io/2018/10/getting-started-with-react-unit-testing-for-wordpress-development/#comments Wed, 24 Oct 2018 15:34:54 +0000 https://torquemag.io/?p=85380 When I first looked at Vue vs React, I chose VueJS. One of the reasons was that I felt like Vue was a better choice was the complexity of React classes and life-cycle events. I felt like that was a lot of extra complication that would help with developing frameworks, but preferred the simplicity of Vue’s HTML-like templates and Angular-like two-way data-bindings. As working with Gutenberg has caused me to readdress React, I’ve found that React can, in many ways be a lot simpler, because I can stick to small, pure functions for most of my components. One thing I […]

The post Getting Started With React Unit Testing For WordPress Development appeared first on Torque.

When I first looked at Vue vs React, I chose VueJS. One of the reasons I felt like Vue was a better choice was the complexity of React classes and life-cycle events. I felt like that was a lot of extra complication that would help with developing frameworks, but preferred the simplicity of Vue’s HTML-like templates and Angular-like two-way data-bindings.

As working with Gutenberg has caused me to readdress React, I’ve found that React can, in many ways be a lot simpler, because I can stick to small, pure functions for most of my components.

One thing I love about React is how easy it is to test and refactor when you follow the single responsibility principle and separation of concerns. This series of posts is, on a meta level, about managing coupling and cohesion when writing code. I’m using React components as a practical example, so you learn React. My friend Carl wrote a really great post about how cohesion and strong coupling happen in WordPress PHP projects and beyond.

Becoming Test-Driven

In my previous posts in this series, I went over React basics. In the rest of this series, I’m going to cover using React with WordPress, starting with the WordPress REST API and then React. I’ll be walking through creating a small application that displays and edits posts from the WordPress REST API. I’m not actually going to connect it to the WordPress REST API. This is important. I used to start by getting my API requests working and then write JavaScript to display it. This is completely backward in my mind and prevents decoupling the WordPress front-end from the server.

By starting with test data, I am forced to develop code with near or complete code coverage. I also think it’s much faster than cowboy coding against a live API and having to refresh a browser all the time. The phase of front-end development I’m teaching here — creating HTML with JavaScript, as opposed to conforming that HTML to a design — does not require a web browser.

Yes, end-to-end testing with a headless browser may be useful, though I do not use it. But my point is that relying on what the browser looks like is a poor test on its own. Using tests instead of the browser forces a pattern of development that I find to be faster and more maintainable.

By developing in this fashion, I have increased my output of code significantly and I have a lot more confidence in what I am writing. Also, knowing that I have to live with this code long-term, it’s important to me to have Coveralls and Code Climate in place to enforce these standards and measure improvement over time.

ES6 Classes

If you started in PHP, like I did, which is not an object-oriented language by default, you may conflate classes with object-oriented programming. This is a false distinction. For example, in JavaScript, we can treat primitive data types — strings, integers, arrays — etc. as objects and even extend their functionality by modifying their prototype.

This is Prototypal Inheritance, which is different from how we extend classes, via classical inheritance, in PHP by overriding methods.

JavaScript does not have “methods” in the form that class-based languages define them. In JavaScript, any function can be added to an object in the form of a property. An inherited function acts just as any other property, including property shadowing as shown above (in this case, a form of method overriding).

Mozilla Developer Network

Personally, I find Prototypal Inheritance a lot harder to understand than classical inheritance. EcmaScript 6 introduced classes into JavaScript as syntactic sugar.

In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language “sweeter” for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer.

Wikipedia, Emphasis mine.

I was super-geeked when classes became available in JavaScript, but having spent more time with them, especially developing React components, I feel like they should be used sparingly.

Pure Functions And Testability

I’m not arguing that you should never extend React.Component, it’s useful, but I always default to small, functional components. Why? They are pure functions — arguments go in, output comes out, no side effects. Pure functions are easy to test. They do not have side effects by default, which is a condition for true unit testing.

Here is a function, which is not pure, that modifies the title of a post in a collection:

View the code on Gist.
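
The gist shows the author’s version; a comparable impure function might look like this, with names assumed.

// Impure: it reaches outside its own scope and mutates the shared posts array.
let posts = [
	{ id: 1, title: { rendered: 'Hello Torque' } },
];

function updateTitle( postId, newTitle ) {
	posts.forEach( ( post ) => {
		if ( post.id === postId ) {
			post.title.rendered = newTitle;
		}
	} );
}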

I say this is not pure, as it modifies the posts variable, which is not in the function’s own scope. Therefore, the modification of the posts variable is a side effect of this function. If I were to write a test for this function, I’d have to mock the global, which is fine, but does that prove anything, given that the test covers elements outside of its control? Sort of.

Let’s refactor to a pure function that is testable. We need to modify an array of posts. By injecting that array into the function, we go from modifying a global array of posts to working with the array of posts that was passed in.

View the code on Gist.
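
Refactored along those lines, a sketch of the pure version, again with assumed names:

// Pure: the posts array comes in as an argument and a new array comes out.
function updateTitle( posts, postId, newTitle ) {
	return posts.map( ( post ) =>
		post.id === postId
			? { ...post, title: { ...post.title, rendered: newTitle } }
			: post
	);
}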

I just applied the principle of dependency injection so that the function can be isolated.

In software engineering, dependency injection is a technique whereby one object (or static method) supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it.

Dependency injection – Wikipedia

One key benefit of dependency injection is that we can inject mock data when testing. Instead of mocking a global and hoping that’s accurate, we are testing the function exactly the way it actually is used.

View the code on Gist.
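
With the pure version, a Jest test can inject mock posts directly; a sketch, assuming the function above:

describe( 'updateTitle', () => {
	it( 'updates the title of the matching post only', () => {
		const mockPosts = [
			{ id: 1, title: { rendered: 'First' } },
			{ id: 2, title: { rendered: 'Second' } },
		];
		const updated = updateTitle( mockPosts, 2, 'Changed' );
		expect( updated[ 1 ].title.rendered ).toEqual( 'Changed' );
		expect( updated[ 0 ].title.rendered ).toEqual( 'First' );
	} );
} );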

Yes, we can test class methods as well. I’ll get to that, but it’s more complicated. I’ll get to when that complication is worth it, but let’s look at when it’s not first.

A Quick Intro To Unit Testing React Components With Jest

While there are a lot of options for testing JavaScript apps, Jest is built by Facebook with React in mind. Even if you’ve never written JavaScript tests before, I think it’s worth learning testing right away. If you know how to write a JavaScript function, you can write a test with Jest.

Let’s walk through setting up tests and writing your first tests. As this post goes on, we’ll add more features to the example app, using tests to guide the development. This will allow us to start small, and build complexity one layer at a time, with our tests making sure nothing falls apart in the process.

Getting Tests Running

First, let’s create a new React app, with a dev server and everything we need to run tests. Coming from a WordPress background, that sounds hard, but, like a developer-friendly framework should, React makes this easy. Seriously, it’s three commands if you have Node, npm and yarn installed.

View the code on Gist.

Once that’s complete, your terminal should show you the URL for your local dev server. Here is a screenshot of my terminal and browser with the default app.

By default, create-react-app adds one test. Open up another terminal, in the same directory and start the test watcher:

View the code on Gist.

The tests are being run using Jest. create-react-app assumes you have Jest installed globally. If that command does not work, do not install Jest or jest-cli into your project. Instead, install globally using npm:

View the code on Gist.

The terminal should look like this:

That’s one test, which basically covers whether mounting the app causes an error or not. That’s a good catch-all acceptance test, but not the kind of isolated unit test we want. Running it does, however, show us whether we have tests running at all.

Jest has a pretty simple API. Let’s look at one test suite, with one test, before jumping into something more practical.

View the code on Gist.
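
The gist has the author’s example; the general shape is something like this, using a hypothetical sum function.

const sum = ( a, b ) => a + b;

describe( 'sum', () => {
	it( 'adds two numbers', () => {
		expect( sum( 1, 2 ) ).toEqual( 3 );
	} );
} );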

Test suites are defined by the describe() function. Everything inside of its closure is considered part of one test suite. Organizing tests into test suites makes them easier to read, and you can skip a whole suite or add a specific setup or teardown function to the suite.

Inside of a test suite, we use the function it() to isolate one test. Both describe and it accept two arguments. The first is a string describing the test suite or test; the second is a function that performs the test. Think about the metaphor this way: “describe a group of features; it has one specific feature.”

Inside of the test, Jest gives us the expect function. We provide expect with the result of the function being tested and then make an assertion. In this case, we’re using the toEqual assertion. We are asserting that the result we expect to equal a value does, in fact, equal that value.

Since pure functions have one output and no side-effects, this is a simple way to test.

Iterate Safely

That’s the basics of Jest. With Jest, you can run basic tests as I’ve shown in this post. As this series continues, I’ll cover snapshot testing React components with Jest and the React test renderer. For more complex testing, I’ll introduce Enzyme.

Specific technologies aside, keep in mind WHY we have tests: so we can iterate on our code safely. When we make a change to our codebase to fix a bug or add a new feature, we need to know that no new bugs or other unintended side effects are introduced. Testing gives us that assurance. By following a test-driven approach, you can confidently add new features to components and apps, allowing for an iterative approach to application interface development.

The post Getting Started With React Unit Testing For WordPress Development appeared first on Torque.
