iframes and downloads with OAuth/REST

The OAuth / REST approach to web clients is common and clean, and works beautifully with the traditional request/response cycles of retrieving and posting JSON objects. As the application grows, however, requirements inevitably pop up to download files, or to display some content in a separate window context (eg. an iframe). As the OAuth approach uses an Authorization header as the default means of authentication, at this point the developer is left to find a “workaround”.

There are several stackoverflow posts covering the topic (eg. this one), but I’d like to show what we’ve decided to do (and what not to do).

Possible designs

  1. Add the JWT/access token to the URL, as ‘access_token’ GET parameter. Spring Security will pick this up out-of-the-box.
  2. Use XHR to download content (setting XMLHttpRequest.responseType property to ‘blob’)
  3. Use a form POST providing a _blank target and passing the access_token as POST parameter
  4. Set a temporary ‘access_token’ cookie, and remove it once the download has finished

And here’s why we chose to go with option (3):

  1. The resulting URLs are stored in history and accessible in access logs. Subsequent calls might even pass along the URL as referer. The OAuth spec clearly advises against passing the access_token as a GET parameter
  2. This loads the entire file into memory, and circumvents the browser download support (Downloads folder, progress bar etc.)
  3. looks good
  4. Opens the possibility of CSRF while the cookie is active, and has the potential of a security hole if the cookie is not (always) cleaned up

Using form POST


For downloads, this approach is fairly simple. Instead of providing a classic anchor with target=_blank, we implement a click handler which uses a singleton, hidden form

<div style="display: none">
  <form #formRef target="_blank" method="POST" action="">
    <input #tokenRef type="hidden" name="access_token">
  </form>
</div>

and populate the access_token and action in the click handler. The actual implementation is not too relevant here; the main point is to achieve the download by posting to a _blank target.
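A hedged sketch of such a click handler (the element shapes and names here are illustrative, not our actual component code; in the real component, `form` and `tokenInput` are the #formRef and #tokenRef elements from the template above, and the token comes from wherever your OAuth client stores it):

```typescript
// Populate the hidden form and submit it, triggering a browser-native
// download in the _blank target. The token only lives on the form between
// click and submit, after which it is unset again.
interface HiddenForm {
  action: string;
  submit(): void;
}

interface HiddenInput {
  value: string;
}

function startDownload(form: HiddenForm, tokenInput: HiddenInput,
    downloadUrl: string, accessToken: string): void {
  form.action = downloadUrl;      // point the form at the protected resource
  tokenInput.value = accessToken; // token travels in the POST body, not the URL
  form.submit();                  // form serialization is synchronous ...
  tokenInput.value = '';          // ... so the token can be unset right away
}
```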


iframes don’t feature very often in modern SPAs, but we’ve found use for them when displaying third-party e-mail content. The key approach is described in this stackoverflow post. Instead of setting the iframe#src to your content URL:

  1. Download content using a normal XHR GET (this is no different from what the iframe would do on setting the src attribute, with the key difference that it happens in the authenticated context of the current window).
  2. Use URL.createObjectURL to create a local BLOB URL
  3. Point the iframe src to the URL created in (2)
  4. Use URL.revokeObjectURL to clean up the local BLOB once it’s not needed anymore
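These four steps can be sketched as follows. The fetch implementation, iframe handle and URL helpers are passed in so the flow is testable outside a browser (in the app they are simply window.fetch, the iframe element and URL); this is an illustrative simplification, not our production code:

```typescript
interface IframeLike { src: string; }

interface UrlApi<B> {
  createObjectURL(blob: B): string;
  revokeObjectURL(url: string): void;
}

// Steps (1)-(3): download in the authenticated window context, wrap the
// result in a local blob: URL, and point the iframe at it. The returned
// callback performs step (4), revoking the blob URL once it's not needed.
async function showInIframe<B>(
    contentUrl: string,
    accessToken: string,
    iframe: IframeLike,
    fetchFn: (url: string, init: { headers: Record<string, string> }) => Promise<{ blob(): Promise<B> }>,
    urlApi: UrlApi<B>): Promise<() => void> {
  const response = await fetchFn(contentUrl, {
    headers: { Authorization: 'Bearer ' + accessToken } // (1) authenticated GET
  });
  const blobUrl = urlApi.createObjectURL(await response.blob()); // (2)
  iframe.src = blobUrl; // (3)
  return () => urlApi.revokeObjectURL(blobUrl); // (4)
}
```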


With the approach described above, we never pass our access token as a GET parameter, nor do we ever set a cookie. The access token is present in the form used to POST the download request, but that can be limited to the time between the user clicking the download button/link (set access token on form) and submitting the form (after which the token can be unset). iframe handling requires some extra work, but there is no network overhead, and buggy cleanup code “only” results in a memory leak, not a security issue.


In the process of developing funnel.travel, a corporate post-booking travel management tool, I’m sharing some hopefully useful insights into Angular 6, Spring Boot, jOOQ, or any other technology we’ll be using.


Connecting REST APIs

As part of the development process, there are some pain points in the area of tool integration, mainly procedural rules which must be followed manually:

  • Whenever you commit a change, label the according JIRA ticket with “code-change” (or set FixVersion, or transition to “Under development”)
  • When you start reviewing / explorative testing a ticket, head over to the test server and make sure the ticket was actually deployed there (or monitor the deployment process and manually move your ticket to “Test” once deployed)
  • At least once a day, check if (or how many) tickets are in “Test” status and process them accordingly.

In the past weeks we’ve developed a product called Isthmus which does the above (and then some) for us. It’s a tool for developers, so there’s no setup wizard or other fancy stuff. Just a UI to configure the YML (which can also be edited directly as text).

The surprising part is that Isthmus is much more powerful than intended. It can actually connect any two APIs which use basic authentication. We haven’t come across anything we need with OAuth, but if or when we’d want to POST to Twitter or such, we’ll extend Isthmus.

If you or your team has pain points similar to the ones mentioned above, maybe you’d like to head over to the Isthmus page and see if our tool can make your process smoother.




Reloadable property file

There are still applications out there which require a restart after changing user settings, but more commonly the settings are observed by a “reloadable property configuration”: if the user edits the settings file, the application notices and reloads the settings.

Tasked with implementing such a feature for a fairly straight-forward YAML file, I came across Apache Commons Configuration2. It is super-powerful, with support for JNDI, JDBC, XML, the .properties format etc., none of which we need. There’s a section on reloading, with configurable strategies, serialization managers and whatnot.

What we’re looking for:

  • Simple de-/serialization between YAML and a Java bean
  • allows to serialize/store settings from the Java bean to YAML, ie. the user edits settings within the application
  • allows to reload settings when the user edits the settings file directly

Luckily Jackson now provides a YAML extension
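(in Maven terms, the jackson-dataformat-yaml artifact; a sketch, with the version being an assumption you’d align with your jackson-databind version)

```xml
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.9.6</version>
</dependency>
```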


and Java NIO has the very handy java.nio.file.WatchService, so we come up with

/*
 * Created on 20 Jul 2018
 */
package ch.want.demos;

import java.io.File;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

@Component
public class UserPropertiesManager {

    private static final Logger LOG = LoggerFactory.getLogger(UserPropertiesManager.class);
    private UserProperties userProperties; // this is our configuration Java bean
    private final ObjectMapper mapper;
    private File propertiesFile;

    public UserPropertiesManager() {
        mapper = new ObjectMapper(new YAMLFactory());
    }

    @PostConstruct
    public void init() {
        initPropertyFileReference();
        readPropertiesFromFile();
        startFileWatcher();
    }

    private void initPropertyFileReference() {
        propertiesFile = new File("config/settings.yml");
    }

    private void startFileWatcher() {
        try {
            new Thread(new ConfigurationEditWatcher()).start();
        } catch (final IOException e) {
            LOG.error(e.getMessage(), e);
        }
    }

    private void readPropertiesFromFile() {
        LOG.debug("Reading configuration from {}", propertiesFile);
        try {
            userProperties = mapper.readValue(propertiesFile, UserProperties.class);
        } catch (final IOException e) {
            LOG.error(e.getMessage(), e);
        }
    }

    public void writePropertiesToFile() {
        LOG.debug("Storing configuration to {}", propertiesFile);
        try {
            mapper.writeValue(propertiesFile, userProperties);
        } catch (final IOException e) {
            LOG.error(e.getMessage(), e);
        }
    }

    private class ConfigurationEditWatcher implements Runnable {

        private final WatchService watcher;
        private final Path configDir;

        ConfigurationEditWatcher() throws IOException {
            watcher = FileSystems.getDefault().newWatchService();
            configDir = Paths.get(propertiesFile.getParentFile().toURI());
            configDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        }

        @Override
        public void run() {
            try {
                WatchKey key;
                while ((key = watcher.take()) != null) {
                    processWatchKey(key);
                    key.reset();
                }
            } catch (final InterruptedException | IOException ex) { // NOSONAR
                LOG.info("Terminating WatchService on configuration file");
            }
        }

        @SuppressWarnings("unchecked")
        private void processWatchKey(final WatchKey key) throws IOException {
            for (final WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.ENTRY_MODIFY) {
                    processWatchEvent((WatchEvent<Path>) event);
                }
            }
        }

        private void processWatchEvent(final WatchEvent<Path> pathEvent) throws IOException {
            final Path filename = pathEvent.context();
            final Path modifiedFile = configDir.resolve(filename);
            if (Files.isSameFile(modifiedFile, propertiesFile.toPath())) {
                readPropertiesFromFile();
            }
        }
    }
}
All we wanted in one simple class. Hope this helps whoever’s reading this.


Test pyramids are history

Fairly often I hear a reference to the test pyramid (Mike Cohn), sometimes in a quite reverential way. In this blog post I’d like to advocate a less dogmatic approach to test structure. (Readers who don’t know what a test pyramid is might be dropping off here, that’s alright).

Let’s say you’re developing a web application to manage corporate travel data. One business process will be “the corporation has hired a new sales woman, and she needs to be registered in the system”. The test pyramid would suggest that there might be a handful of e2e tests, several dozen integration tests (checking eg. that our web application understands updates from SAP), and maybe a hundred or more unit tests.

There are lots of blogs about why it’s good practice to write tests, so I’ll take that as a given. (Readers who don’t write tests might be dropping off here, that’s alright). One point I find is often not emphasized enough is that the most fundamental purpose of the entire test suite is to ensure your software works. With that I don’t mean the SAP interface, nor the destination caching, nor that pesky VAT calculation for domestic Swiss flights. I mean that when the corporate customer using funnel.travel hires a new sales woman, the system administrator can hook up her user account, assign it to her designated travel assistant, and set an annual travel expense budget.

At the top of the test pyramid, thus, we will write one single test, which covers this entire business process. I see such a test as above e2e, we call them epic tests. This single test provides the confidence that the business process works (and if it is the only breaking test, one of the first steps is to add a unit or integration test catching the bug).

Being aware of exceptional cases, we’ll add some more tests at the e2e level to make sure we get proper system behavior for missing input data, duplicate email addresses and such.

Quite likely, during development we came across a few glitches, or just wanted to quickly ensure that the new and tricky stream filter worked ok even with entries from different time zones. For those things we wrote unit tests.

I’d like to point out what we didn’t do:

  • we didn’t assess the number of e2e tests and deduce from that a number of unit tests which must be met
  • we didn’t assume that a set of passing e2e tests, each covering small steps, ensures that an entire business process works

From the tests mentioned above, the epic test gives us the confidence that a business process works. Further e2e tests (and maybe some integration tests) give us the confidence our system will handle exceptional cases as designed. Unit tests helped us develop (and document) our components.

It is very likely that writing appropriate tests will result in a lot of unit tests, a smaller number of integration tests, and an even smaller number of e2e tests. But it’s important to focus on the value each test is providing. If you have thousands of unit tests, but they’re infested with mocks creating a test context completely disconnected from real-life, those tests don’t add any value. They might even hurt by giving a false sense of confidence while actually only proving that your system can handle data where most properties are null and collections are empty.

If you hear that there should be more tests of a certain kind, and the only justification is the test pyramid, there’s a good chance that project has a vast number of tests, and yet defects keep pouring in. Might be time to revise your test strategy. (Some readers might drop off here because they need to attend the “Increase productivity” workshop, that’s alright).

Side note: I believe that test structures are very different between applications and framework libraries. For the former, the “user” is a person who will click a button and expect a response. The most important test here is one which clicks a button and asserts the response. For the latter, the “user” is another developer calling a public method. I feel application developers often adopt the best practices of framework developers, with subpar results.


We can’t reproduce that on our test system

As soon as a web application makes it to production (and actually draws some attention), the developers are confronted with support cases which are coupled with specific data constellations. The entire test pipeline might be successful, but when a user dares to attempt a checkout without any items in the shopping cart, they get an ArrayIndexOutOfBoundsException. The next support case might not be that easy, because when one single user attempts to change her address, she gets a NullPointerException at a code line where there simply cannot be a null-reference.

“We apologize, but we cannot reproduce this system behavior.” is a nice way of saying that the development team does not have the means to take a data set from production and run it in a test / development environment. Such an export / import functionality is a crucial part of providing first-class support (and continuously improving the web application).

As funnel.travel uses jOOQ, implementing such an export / import was straight-forward. Our export includes all master data (organization structure, vendors etc.) for a given account. We chose to simply export ‘all’ as that data set is never huge, and the added complexity of choosing what is needed doesn’t outweigh the benefit of a smaller export package. (If you’re Amazon, you probably wouldn’t want to export your entire product catalog, though).

final Writer streamWriter = new OutputStreamWriter(stream, Charset.forName("UTF-8"));

jooq.selectFrom(OrganizationUnit.ORGANIZATION_UNIT) // fetch the records to export, eg.
    .fetch()
    .formatJSON(streamWriter, JSON_FORMAT);

A caveat here was the required Writer. When streaming to a ByteArrayOutputStream, jOOQ will use the JVM default encoding, while JSON RFC standard asks for UTF-8. Thus the JSON produced via BAOS was actually invalid and could later not be parsed using Jackson.

Also noteworthy is that the JSON format must contain ‘headers’, as these will be used later for the import. Using a

new JSONFormat().header(false)

will produce syntactically valid JSON, but the import will subsequently fail with obscure exceptions.

A second issue with jOOQ JSON handling is custom data types (see this Google Group thread). formatJSON() will use toString() on a custom type, while later loadJSON() will attempt to load the value as the database type. If you’re mapping an enum to an ordinal (eg. TINYINT on the database):

enum FOOBAR → formatJSON (using FOOBAR#toString()) → JSON contains ‘FOOBAR’ → loadJSON() → fails to convert ‘FOOBAR’ to a TINYINT (value is set to null)

We’ve written an intermediate workaround to convert the custom data type to the database type (see our comment on the aforementioned group thread), making use of a ‘CustomDatatypeProcessor’. Thus, importing JSON back into the database becomes:

final CustomDatatypeProcessor<OrganizationUnitRecord> processor = new CustomDatatypeProcessor<>(OrganizationUnit.ORGANIZATION_UNIT);

final Loader<OrganizationUnitRecord> loader = jooq.loadInto(OrganizationUnit.ORGANIZATION_UNIT)
    .onDuplicateKeyUpdate()
    .loadJSON(processor.parse(jsonData))
    .fields(processor.fields())
    .execute();


Note the incredibly useful “onDuplicateKeyUpdate” option there. If the account has been imported into our test system earlier, we want to be able to import a different trip, and thereby updating the account’s master data. The processor’s “parse” method does the custom data type magic described earlier. The “fields” method returns the fields in the order contained in the JSON (whereas the default loadJSON() expects fields in the order of the current table schema).

Now that we have the data set on our test system, we can debug that NPE when changing the address, and eventually find that in this case the call to the account’s CRM results in a response where a field is null, despite the specs describing the field as @NotNull. Specs adapted, new test to cover that use case, code fixed, CI deploys, and the user can change her address.

Bottom line: while there were a few bumps, we nevertheless were able to implement an export / import functionality in funnel.travel within 2 days, largely thanks to the awesome formatJSON() / loadJSON() of jOOQ, and will thus be able to debug using a subset of actual productive data. And the need for that will arise, I’m sure.



Handling boilerplate forms

For a change, this post doesn’t deal with specific code, but describes our approach to dealing with the numerous but simple edit forms that go with most enterprise web-applications.

Most business applications require a set of data, often referred to as “master data” or “static data”, which is merely a supporting cast. The true value of the business application revolves around other data structures. As an example, a financial investment tool will boast about structures like assets and portfolios. It will most likely fail to mention that the tool also manages a list of currencies, zip codes and BIC/SWIFT codes.

Usually the business application will need to allow for managing the supporting cast, but these forms should require only little effort to build, and then be low-maintenance. As the data structures are often rather simple, this calls for a generic approach.

Ad-hoc form configuration

In a web-application I developed in early 2016 using basic jQuery, Backbone.js and Handlebars.js, I ended up using the early-stage backbone-forms. The server provided a JS file containing the model/form configuration, and backbone-forms dynamically created the form.

At the time the decision to have the server provide the form configuration (albeit cached client-side) was based on the benefit of deriving the configuration directly from annotations on the model. Thus adding a new persisted field and annotating the getter resulted in the field showing up on the form.

However, in a TypeScript / AOT world, this approach is not feasible.

Code generation

In funnel.travel, we still want dynamic form creation for simple edit forms. Todd Motto has written an excellent blog post which served as a starting point. We added an additional form autocomplete component based on the blog post by Jeff Delaney (albeit replacing anything ‘Firebase’ with standard REST queries). Another puzzle piece was to add i18n to the dynamic form.

As of writing this post, Angular 5 is still not able to use translation strings outside a template, which means the configured control labels and placeholders cannot be translated using standard Angular 5 i18n.

Our current solution has two main components.

  1. Code generation (run whenever the models change)
  2. Dynamic Angular forms

The code generation will read model data, which is annotated with Spring roles allowed to view and change each property.

@FormPropertyAccess(granted = { UserSecurityRole.SYSTEM_ADMIN, UserSecurityRole.COMMUNITY_ADMIN, UserSecurityRole.CLIENT_ADMIN })
public Userlogin setArranger(final Boolean arranger) {
    return super.setArranger(arranger);
}
The code generator then creates

  • A ‘codekeys.component.html’ holding all dynamic i18n keys, used by our translator service
  • An Angular model class (class Userlogin)
  • A constant empty instance of said model class (emptyUserlogin: Userlogin)
  • A form configuration listing all properties
properties = {
  email: {
    type: 'string',
    formControl: 'text',
    default: null,
    readAccess: [],
    label: 'userlogin.email',
    optional: false
  },
  arranger: {
    type: 'boolean',
    formControl: 'slidetoggle',
    default: null,
    label: 'userlogin.arranger',
    optional: true
  }
};
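To illustrate how such a generated configuration is consumed, here is a hedged TypeScript sketch (the PropertyConfig shape and helper function are simplifications, not our actual form builder): the dynamic form component iterates the entries, skips controls the current user may not read, and seeds each control with its default.

```typescript
// Derive the controls to render (and their initial values) from a generated
// form configuration, honoring the per-property read access roles.
interface PropertyConfig {
  type: string;
  formControl: string;
  default: unknown;
  readAccess?: string[]; // empty or absent = visible to everyone
  label: string;
  optional: boolean;
}

function visibleControls(
    properties: Record<string, PropertyConfig>,
    userRoles: string[]): Array<{ name: string; config: PropertyConfig; value: unknown }> {
  return Object.keys(properties)
    .filter(name => {
      const access = properties[name].readAccess;
      return !access || access.length === 0 || access.some(role => userRoles.indexOf(role) >= 0);
    })
    .map(name => ({ name, config: properties[name], value: properties[name].default }));
}
```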

Dynamic forms

We’ve decided to make a few parts of the funnel.travel source code public, hosted on github. Our implementation of the “form-autocomplete” control can be found there, as an addition to and based on the blog posts mentioned above. Also, our translator service can be found there.

We haven’t done any layout work, so I’ll refrain from posting any pictures of our dynamic forms, but they are fully functional, and with only a few lines of change + running our code generation, we can:

  • add/remove model properties
  • change authorization to read/write a property

If you’d like more information, just drop us a line.



Create a global error component for Angular 4

I quite like the material design in Angular (using @angular/material 2.0.0-beta.12). One component I’m missing is an error component which is global for an entire form (or any other kind of user interaction). Some errors cannot be tied to a specific input field; such errors should be displayed on the form, independent of any individual input control (eg. “Data on this form has been changed in the meantime. Submit again to overwrite, or refresh“, or “A customer with identical name has already been registered for the same address.“).

In this post I’ll show how we’ve implemented a custom Angular component to that purpose.


Given: a global error handler

In our CoreModule, we’ve defined a

providers: [ { provide: ErrorHandler, useClass: GlobalErrorHandler } ]

The GlobalErrorHandler extends ErrorHandler, and stores an error context.

import { ErrorHandler, Injectable, Injector } from '@angular/core';
import { LocationStrategy, PathLocationStrategy } from '@angular/common';

import { Subject } from 'rxjs/Subject';

import { ErrorContext } from './errorcontext.interface';

@Injectable()
export class GlobalErrorHandler extends ErrorHandler {

  private emptyErrorContext: ErrorContext = {
    message: '',
    location: ''
  };

  private errorContextSubject = new Subject<ErrorContext>();
  errorContext$ = this.errorContextSubject.asObservable();

  /**
   * Since error handling is really important it needs to be loaded first,
   * thus making it not possible to use dependency injection in the constructor
   * to get other services such as the error handler API service to send the server
   * our error details
   * @param injector
   */
  constructor( private injector: Injector ) {
    super( false );
    this.errorContextSubject.next( this.emptyErrorContext );
  }

  handleError( error ) {
    const locationStrategy = this.injector.get( LocationStrategy );
    const nextErrorContext: ErrorContext = {
      message: error.message ? error.message : error.toString(),
      location: locationStrategy instanceof PathLocationStrategy ? locationStrategy.path() : ''
    };
    this.errorContextSubject.next( nextErrorContext );
    super.handleError( error );
  }
}

The reason we’re using ‘extends’ instead of ‘implements’ is that it allows us to pass the error on to super, which eventually calls the base ErrorHandler’s handleError().

Some notes where we got sidetracked:

  • Compiler warnings about exports not being found are discussed here; the solution was to separate the ErrorContext interface into its own file
  • Some stackoverflow posts state “When applying this on the root module, all children modules will receive the same error handler (unless they have another one specified in their provider list).“, which is misleading. Providing the GlobalErrorHandler on the CoreModule exposes it to sibling modules, not just children.


The error component

The global form error HTML (formerror.component.html) is pretty straight-forward

The component hooks up with the GlobalErrorHandler described above, and subscribes to changes in the error context.

@Component( {
  selector: 'mymat-form-error',
  templateUrl: './formerror.component.html'
} )
export class FormerrorComponent implements OnInit, OnDestroy {

  errorContextState$: Observable<ErrorContext>;
  errorContext: ErrorContext;
  private errorContextSubscription: Subscription;

  constructor( private errorHandler: ErrorHandler, private cdRef: ChangeDetectorRef ) {
    const defaultError: ErrorContext = {
      message: '',
      location: ''
    };
    this.errorContext = defaultError;
    if ( this.errorHandler instanceof GlobalErrorHandler ) {
      this.errorContextState$ = this.errorHandler.errorContext$;
    }
  }

  ngOnInit() {
    if ( this.errorContextState$ ) {
      this.errorContextSubscription = this.errorContextState$
        .subscribe( data => {
          this.errorContext = data;
          this.cdRef.detectChanges();
        } );
    }
  }

  ngOnDestroy() {
    if ( this.errorContextSubscription ) {
      this.errorContextSubscription.unsubscribe();
    }
  }
}

There were two main issues:

Injecting an ErrorHandler

Trying to inject the custom GlobalErrorHandler instead of the base class resulted in a

Error: Uncaught (in promise): Error: No provider for GlobalErrorHandler!

I suspect that at the time of injection analysis, the error handler isn’t yet instantiated. Debugging never shows a c’tor with anything other than the GlobalErrorHandler, so the error must occur before actual component instantiation.

Component UI update

Without the call to ChangeDetectorRef#detectChanges(), the UI was not updated when eg. an HTTP POST resulted in an error. Whenever Angular (zone) detected a change and eventually updated the UI, the error component showed the previous error. I don’t yet understand why change detection doesn’t get this instance, but there seem to be quite a few similar issues out there (eg. this one in combination with redux).





Running Karma with Maven on Jenkins CI

Most tutorials on automating tests for an Angular SPA are based on Jenkins running on the development machine, and executing Karma through a Jenkins “Execute shell”. We have our SPA set up as part of a multi-module Maven project (as the server side is a Spring Boot application).

Thus, here is the step-by-step guide for a CI server using Ubuntu 16.04.3 LTS and a Maven build. As prerequisites, I’ll assume Jenkins CI is already up and running. If not, follow the guide here. Also, I assume you’ve installed Karma and Jasmine in your local project. (Weird, though, because if you haven’t done that already you probably don’t have any tests to run anyway.)

Node.js / npm on the Jenkins CI server

Node.js packages can be platform-specific (eg. PhantomJS). Therefore:

  • Do not commit /node_modules/ to the Git repository
  • Setup Node.js / npm on the Jenkins CI server. Also, provide @angular/cli as global package.
sudo apt-get install nodejs 
sudo apt-get install npm
sudo npm install -g @angular/cli@latest --unsafe-perm

(I needed the unsafe-perm flag, without it the install was stuck in an endless loop due to some access denied error.)

fyi: There is a NodeJS plugin in Jenkins CI which could be used as an alternative to installing nodejs and npm manually.

Jenkins job

Assuming a standard Maven job in Jenkins, the additional configuration needed:

A “Pre Build – Execute shell” step (where ‘webclient’ is the Maven module name of our Angular SPA)

rm -R $WORKSPACE/webclient/node_modules;

(Edit: previously, and for reasons I cannot recall, I also added a ‘rm -f $WORKSPACE/webclient/package-lock.json;’. I just ran a build without, and had no problems, so I removed that line)


With our pom.xml, we want to achieve several things:

  1. Run a “npm install” before running tests
  2. Run karma
  3. Report tests results to Jenkins

In order to run “npm install” prior to tests, I use “exec-maven-plugin“, with

  <id>npm install</id>

To execute karma, use the same plugin with a different execution:


Finally, to get the test results properly displayed in Jenkins, I use the Jenkins reports directory property (see bottom of this page):


The ‘karmaTest’ there matches the execution/id of the “ng test” execution above.
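Put together, the exec-maven-plugin section might look roughly like this (a sketch under the assumption that npm and ng are on the PATH; the phases are illustrative, the ids are the ones referenced above):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>npm install</id>
            <phase>generate-resources</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>npm</executable>
                <arguments>
                    <argument>install</argument>
                </arguments>
            </configuration>
        </execution>
        <execution>
            <id>karmaTest</id>
            <phase>test</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>ng</executable>
                <arguments>
                    <argument>test</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>
```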


The karma.conf.js has (irrelevant properties are not included)

frameworks: ['jasmine', '@angular/cli'],
plugins: [
  // karma-jasmine, karma-junit-reporter, karma-phantomjs-launcher, @angular/cli plugin
],
junitReporter: {
  // results will be saved as $outputDir/$browserName.xml
  outputDir: 'target/karma-reports/'
},
reporters: ['junit'],
autoWatch: false,
browsers: ['PhantomJS'],
singleRun: true

Note how the ‘outputDir’ matches the ‘reportsDirectory’ defined in the pom.xml above.


With this setup, we now have automated testing using Karma on Jenkins CI.

Using OAuth2 with Angular SPA

There are quite a few stackoverflow questions out there asking how to secure an OAuth2 client ID + secret in a pure-Angular SPA (eg. here, here or here). A lot of the answers eventually aim at changing the givens:

  • pure client-side Angular application
  • use OAuth2
  • want to secure client ID + secret

by suggesting to encrypt the client ID, or maybe add a server-part.

From what I’ve learned so far, the short answer is: it cannot be done, and you’re probably asking the wrong question.

OAuth2 links client IDs with certain privileges, eg. what grant flows are allowed. Separating an application into server-side and client-side SPA is a classic example of where a Password Credentials grant flow actually makes sense. However, you don’t want to allow any client that grant flow, just your own. But why?

The restriction does not aim at preventing brute-force password attacks. Any attacker can run those directly against the server component, getting the required information from the server redirect page during a normal Authorization Code grant flow. The restriction does aim at preventing another client from posing as a fully trusted “part of our application” client, and thus getting users to provide their username/password (read: phishing). However, this would not be a cURL request, but would have to originate from a browser.

My suggestion in this scenario is to map the Origin (HTTP header) to an internal client ID on the server side. Of course the Origin header can be forged, but then – again – we’re talking about forged requests, and not phishing. The latter would run in a normal browser, sending the normal Origin header (which is controlled by the browser, and cannot be spoofed from within the browser).
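A sketch of that mapping as a pure, framework-agnostic function (the origins and client IDs are made up for illustration):

```typescript
// Map the browser-controlled Origin header to an internal OAuth client ID.
// Unknown or missing origins get no client ID, so they cannot use the
// trusted grant flow reserved for our own SPA.
const ORIGIN_TO_CLIENT_ID: Record<string, string> = {
  'https://app.example.com': 'first-party-spa' // hypothetical trusted SPA origin
};

function resolveClientId(originHeader: string | undefined): string | undefined {
  if (!originHeader) {
    return undefined; // no Origin header: not a browser XHR, refuse the trusted flow
  }
  return ORIGIN_TO_CLIENT_ID[originHeader];
}
```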

Technology stack growing pains

The basic technology stack of the funnel.travel server and web client is pretty much defined. The server will be running on a Java app server, the code is based on the Spring framework, jOOQ and an underlying PostgreSQL DBMS. Mainstream stuff, really. The client will be a stand-alone Angular 4 client, with Material design. A tad unexpected are the basic issues of getting a simple login to work.


Cross-origin resource sharing (CORS) becomes an issue when separating the server part from the web client. In a traditional Java stack, the entire application would be served (locally) from localhost:8080. But now the server runs on localhost:8080, while the ng client runs on localhost:4200.

The key point is to enable CORS in the WebSecurityConfigurerAdapter subclass:

protected void configure(final HttpSecurity http) throws Exception {
    http.cors();
    // ... remainder of the security configuration
}

Make sure the CorsConfiguration allows the OPTIONS method, as the browser sends an OPTIONS pre-flight request for cross-origin HTTP requests.
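For reference, a CorsConfigurationSource bean along these lines is picked up by http.cors() (a sketch; the allowed origin and headers are illustrative and would normally come from configuration):

```java
@Bean
public CorsConfigurationSource corsConfigurationSource() {
    CorsConfiguration config = new CorsConfiguration();
    config.setAllowedOrigins(Arrays.asList("http://localhost:4200")); // illustrative dev origin
    // OPTIONS must be allowed, otherwise the pre-flight request fails
    config.setAllowedMethods(Arrays.asList("GET", "POST", "PUT", "DELETE", "OPTIONS"));
    config.setAllowedHeaders(Arrays.asList("Authorization", "Content-Type", "X-XSRF-TOKEN"));
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", config);
    return source;
}
```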


Once the OPTIONS request is processed ok, the next issue is “Could not verify the provided CSRF token because your session was not found.” A surprising number of Stackoverflow answers simply suggest disabling cross-site request forgery checks in Spring. But we’re developing a brand-new application, and disregarding CSRF seems like the wrong approach. (If you’re already going: “But you don’t …”, maybe skip this chapter. I’m leaving it in to illustrate the learning curve.)

A first step is to tell Spring to use a cookie-based token repository instead of the default HTTP-session one. The cookie name ‘XSRF-TOKEN’ is exactly what Angular is looking for.

protected void configure(final HttpSecurity http) throws Exception {
    http.csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
    // ... remainder of the security configuration
}

The client-side counterpart in Angular is enabled by default; if you’re curious, google for “CookieXSRFStrategy”. Now all that is needed is to initially get an XSRF token, and Angular will pass it on as an HTTP header in subsequent requests.

Except this doesn’t work yet, because the XHRs are cross-origin. First we need to update our Spring configuration
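The missing piece on the Spring side is allowing credentials on the CORS configuration (a sketch; the origin is illustrative):

```java
CorsConfiguration config = new CorsConfiguration();
config.setAllowCredentials(true); // allow cookies on cross-origin requests
// note: once credentials are allowed, allowed origins must be explicit, not "*"
config.setAllowedOrigins(Arrays.asList("http://localhost:4200")); // illustrative
```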


and in Angular set

import { RequestOptions } from '@angular/http';

let options = new RequestOptions();
options.withCredentials = true; // include cookies on cross-origin requests

to make the cross-origin HTTP requests aware of cookies (more here). The issue now is that Angular’s CookieXSRFStrategy would need to read the XSRF cookie from the API domain in order to set it as “X-XSRF-TOKEN” header. As the cookie is cross-origin, Angular has no access to it.

Now that the issue is handling CSRF across origins, the penny finally drops. What CSRF protection does is prevent the server from regarding a request as valid based on cookies alone. The approach is to require a specific HTTP header with a value that is dynamic and verifiable on the server (thus some form of token).
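Conceptually, the server pairs the implicit credential (cookie/session) with a header value the attacker cannot produce. This is not Spring’s actual implementation, just an illustration of the check; the constant-time comparison avoids leaking token prefixes through timing:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CsrfCheck {

    // A request is valid only if the token echoed in the custom header matches
    // the token the server associated with this client (eg. handed out via cookie).
    public static boolean isValid(String expectedToken, String headerToken) {
        if (expectedToken == null || headerToken == null) {
            return false; // a forged cross-site request simply lacks the header
        }
        // MessageDigest.isEqual performs a constant-time byte comparison
        return MessageDigest.isEqual(
                expectedToken.getBytes(StandardCharsets.UTF_8),
                headerToken.getBytes(StandardCharsets.UTF_8));
    }
}
```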

Note on the side: I’ve read statements like “If your server requires an HTTP header ‘X-Requested-With: XMLHttpRequest’, you’re already safe, because a browser will never send that header.” I disagree, because an attacker can easily craft a CSRF attack using an Ajax framework.

We’re planning to use OAuth2 to handle authentication, which means our client will send an “Authorization: Bearer” header with every request. There’s the HTTP header that will prevent CSRF! I’m disabling CSRF in Spring now…

protected void configure(final HttpSecurity http) throws Exception {
    http.csrf().disable();
    // ... remainder of the security configuration
}


Login mechanism / OAuth2

(Whenever this blog mentions ‘OAuth’, we’re referring to ‘OAuth2’.)

We’re heading for token-based authentication (great post from Chris Sevilleja), in part for scalability, but also because we anticipate future clients accessing our API (both interactive and machine clients). OAuth2 seems like a natural choice. (We discarded Firebase because funnel.travel will have an “all data is stored in Switzerland” option.)

After getting a basic setup following a tutorial like this one, the first issue is that the Angular “OPTIONS” call is not authorized on /oauth/token. Spring seems to have some issues with CORS and OPTIONS calls, the main one being that the Spring CORS filter is placed too late in the filter chain, so that security is applied first. As the OPTIONS call will never carry an ‘Authorization’ header, it is rejected. For now we’re using the hack described in this stackoverflow post.
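One common workaround (roughly what such answers amount to) is to register the CORS filter ahead of the security filter chain, eg. via a FilterRegistrationBean with highest precedence. A sketch, assuming Spring Boot and an existing CorsConfigurationSource bean:

```java
@Bean
public FilterRegistrationBean corsFilterRegistration(CorsConfigurationSource corsConfigurationSource) {
    FilterRegistrationBean registration =
            new FilterRegistrationBean(new CorsFilter(corsConfigurationSource));
    // run before Spring Security, so OPTIONS pre-flights on /oauth/token are answered
    registration.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return registration;
}
```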

While the Angular web client can technically be seen as an “application accessing server resources on behalf of a user”, the user experience of a login resulting in an OAuth2 consent pop-up along the lines of “funnel.travel web client is asking for permission to access funnel.travel server. Do you want to grant permission?” wouldn’t be great. So for our Angular client we want to support the Password Credentials grant flow.

Spring’s TokenEndpoint is a bit messed up when it comes to that grant flow, because the TokenEndpoint implementation expects an authenticated principal going into the postAccessToken() method. In the Authorization Code or Implicit flow this is fine, as we’d be sending the client ID + secret along. But for Password Credentials, RFC 6749 allows a flow without providing client ID + secret.

Our approach is to add a TokenEndpointAuthenticationFilter which checks whether the request is a grant_type=password request from a trusted origin. There’s some security by obscurity in that, but it eventually leads to adding a client authentication, from which point the TokenEndpoint will issue a valid token.
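The core check inside such a filter might look like this (illustrative names; the grant_type parameter value follows RFC 6749, and the allowed origins would come from the same CORS whitelist as above):

```java
import java.util.Set;

public class PasswordGrantCheck {

    // Accept a token request only if it is a password grant
    // coming from one of our own (CORS-whitelisted) browser origins.
    public static boolean isTrustedPasswordGrant(String grantType, String origin, Set<String> allowedOrigins) {
        return "password".equals(grantType)
                && origin != null
                && allowedOrigins.contains(origin);
    }
}
```

If the check passes, the filter sets a client Authentication on the security context, which is what TokenEndpoint.postAccessToken() expects.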

public void configure(final AuthorizationServerSecurityConfigurer oauthServerSecurityConfig) throws Exception {
    oauthServerSecurityConfig
            .addTokenEndpointAuthenticationFilter(new PasswordGrantAuthenticationFilter(environmentProperties.getAllowedCors()));
}

One weird effect we observed: when configuring basic web security, OAuth authorization server, OAuth resource server and finally OAuth method security in separate public classes, the resulting filter chain didn’t contain our custom filter mentioned above. Moving everything into one public class (with inner static classes) automagically resolved the issue.

Where are we now, what’s next?

Developer: We have an up and running Spring + Angular stack, CORS-enabled and secured with Spring Security and OAuth2, backed by jOOQ and PostgreSQL.

Sales guy: What, after all that time all you’ve got is a login page?

Up next: looking into i18n / l10n for Angular, which at first glance seems rather over-engineered. But then, this is coming from server-side thinking.