How Vrbo Engineers Revamped Their Web App

What does it take to pull off a major brand refresh? A whole team of engineers, designers, marketers, and more! The Vrbo engineering team has been hard at work coding and testing to launch the new and improved Vrbo web app to coincide with the brand reveal. Three Vrbo engineers are detailing what went on behind the scenes to make the release possible.

Martin Note, who leads the UI Toolkit team, has been with Vrbo for over seven years. One of his main projects during the refresh was inspecting and updating the old code to get everything on brand and implement the new Vrbo font.

“Working at HomeAway I’ve heard a lot of ‘HomeAway? What’s that? Is that like Vrbo?’ So it’s fun and exciting to work at a company whose product people recognize. Also, as a former musical theater kid I love the new commercial!”

The Vrbo brand refresh gave engineers the opportunity to improve and “housekeep” things like font and style on the website.

“We commissioned a bespoke font (Freight Sans LF Pro) which we’ve never done before. Our family of sites share the same code base so we needed to make sure the typefaces had the same lining figures to avoid excessive overrides. Then, we essentially had to reverse engineer what Google Fonts does and apply it to our own product to host and load web fonts in a performant manner.” – Martin N.

Bongo Russom, Software Engineer, said his biggest takeaway from the refresh was being able to look at Vrbo holistically and test the site as a whole to discover areas of friction.

“A good example of this was the social sharing link preview images. Previously there was no standard for social sharing links for our applications. One of my teammates pointed out that there were instances in which the old Vrbo logo was displaying in poor resolution. I worked with Martin (who really did all of the heavy lifting) to come up with a design for better images to use for social sharing.” – Bongo R.

Throughout the refresh process, employees from all areas of the business came together weekly for “testing DoJos,” where everyone would get in a room and actually test the site. With a step-by-step guide, they’d test specific tools and practice booking a property as a traveler would.

“The testing DoJo was the first time in a while we could all get together and test things out as a whole. The refresh inspired us to schedule more testing meetings across all the teams and start discussions about looking into usability testing.” – Bongo R.

Thomas Cardwell, Software Engineer, dove right in with the testing and recently booked a property in Barbados on the new Vrbo app!

“My friends set up a Trip Board together (one of the new Vrbo app features) and we used it on Android and iOS so it was a real-life use case. They loved that we could comment and talk directly within the app about the properties and we even voted to decide on the house we booked. It was a cool experience testing out the app in real life!” – Thomas C.

Collaborative Trip Boards allow travelers to chat about specific rentals within the app. When launch day came around, the teams were excited to see these features come to life with just the click of a button.

“It was cool being in the office the night we went live and having a ton of engineers around pushing out the updates and the app. Leadership did a great job of prioritizing updates and releases so we didn’t have to have every single thing perfect for launch day; we could continue to iterate in the coming days and weeks.” – Thomas C.

For all three engineers, this was the first time contributing to a major brand refresh and they all consider it something special to be part of.

“Working for a tech company for seven years, some people think that’s a long time in the tech world, but I’m working on a product that I love with great coworkers and we’re constantly adapting so I still love it!” – Martin N.

Follow Vrbo Life on social to learn more about what their teams are up to!

Vrbo Life Facebook

Vrbo Life Instagram

Vrbo Life Twitter

Vrbo on LinkedIn

Amazon DocumentDB Review

Gianluca Della Corte | Systems Architect, Hotels.com in London

Originally published on the Hotels.com Technology blog

On January 9th Amazon announced a new database service called Amazon DocumentDB, described as “a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads”.

Is Amazon DocumentDB a real MongoDB?

While offering a MongoDB-compatible API, DocumentDB does not run MongoDB software; rather, “Amazon DocumentDB emulates the responses that a client expects from a MongoDB server by implementing the Apache 2.0 open source MongoDB 3.6 API” on top of an undisclosed storage engine. From the available information, it looks like it is built on top of the Aurora storage subsystem that is also used by both Aurora MySQL and Aurora PostgreSQL. In fact, the following features/limitations are common to both DocumentDB and Aurora:

  • both replicate six copies of data across three AWS Availability Zones
  • both have a cluster size limit of 64 TB
  • both disallow null characters (‘\0’) in strings
  • both limit identifiers to 63 characters
  • both persist a write-ahead log when writing
  • neither needs to write full buffer page syncs

High Availability

Amazon DocumentDB is designed for 99.99% availability and replicates six copies of your data across three AWS Availability Zones (AZs). The availability goal is lower when you have fewer instances or when the cluster is deployed across fewer than three AZs:

Fig. 1: DocumentDB availability

An Amazon DocumentDB cluster consists of two components:

  • Cluster volume: a cluster has exactly one cluster volume, which can store up to 64 TB of data.
  • Instances: provide the processing power for the database, writing data to, and reading data from, the cluster storage volume. An Amazon DocumentDB cluster can have 0–16 instances:
     – Primary instance: supports read and write operations and performs all data modifications to the cluster volume. Each Amazon DocumentDB cluster has one primary instance.
     – Replica instance: supports only read operations. An Amazon DocumentDB cluster can have up to 15 replicas in addition to the primary instance.

Fig. 2: Deployment scenario

If the primary instance fails, an Amazon DocumentDB replica is promoted to the primary instance. There is a brief interruption during which read and write requests made to the primary instance fail with an exception. Amazon estimates this interruption is less than 120 seconds.
You can customise the order in which replicas are promoted to the primary instance after a failure by assigning each replica a priority; note that it is strongly suggested that replicas be of the same instance class as the primary. It is also really important to create one or more Amazon DocumentDB replicas in at least two different Availability Zones; that way, your datastore can survive a zone failure.

Scalability & Replication

By placing replica instances in separate Availability Zones, it is possible to scale reads and increase cluster availability.

Compute and storage scale independently. Reads can be scaled by deploying additional replicas, while storage scales automatically up to 64 TB: DocumentDB adds 10 GB whenever the volume reaches capacity.

DocumentDB is also able to automatically fail over to a read replica in the event of a failure, typically in less than 30 seconds. Currently, Amazon DocumentDB doesn’t support any kind of multi-region setup.

Amazon DocumentDB does not rely on replicating data to multiple instances to achieve durability; data is durable whether the cluster contains a single instance or 15.
All writes are processed by the primary instance, which executes a durable write to the cluster volume and then replicates the state of that write (not the data) to each active replica. Writes to an Amazon DocumentDB cluster are atomic within a single document.

Consistency

Reads from Amazon DocumentDB replicas are eventually consistent with minimal replica lag (AWS says usually less than 100 milliseconds) after the primary instance writes the data:

  • reads from an Amazon DocumentDB cluster’s primary instance have read-after-write consistency
  • reads from a read replica have eventual consistency

It is possible to modify the read consistency level by specifying the read preference for the request or connection (all MongoDB read preferences are supported; see the sketch after this list):

  • primary: reads are always routed to the primary instance
  • primaryPreferred: routes reads to the primary instance under normal operation, in case of failover a replica is used
  • secondary: reads are only routed to a replica, never the primary instance
  • secondaryPreferred: reads are routed to a read replica when one or more replicas are active. If there are no active replica instances in a cluster, the read request is routed to the primary instance
  • nearest: routes reads based solely on the measured latency between the client and all instances in the Amazon DocumentDB cluster
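
To make this concrete, here is a minimal sketch using the Node.js MongoDB driver (our choice for illustration; the cluster endpoint, credentials, and collection names are placeholders). Because DocumentDB implements the MongoDB 3.6 API, the read preference can be set directly on the connection string:

import { MongoClient } from "mongodb";

// Placeholder DocumentDB cluster endpoint and credentials.
const uri =
  "mongodb://user:password@my-cluster.cluster-example.eu-west-1.docdb.amazonaws.com:27017" +
  "/?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  // With secondaryPreferred, this query is routed to a replica when one is
  // active, and falls back to the primary otherwise.
  const bookings = client.db("travel").collection("bookings");
  const recent = await bookings.find({ status: "confirmed" }).limit(10).toArray();
  console.log(recent.length);
  await client.close();
}

main().catch(console.error);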

Operations

It is possible to create an Amazon DocumentDB cluster using a CloudFormation stack.
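
For illustration, a minimal template along these lines might look as follows (the resource names, parameter, and instance class are our own placeholders, not taken from the original post):

AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  MasterPassword:
    Type: String
    NoEcho: true
Resources:
  # A DocumentDB cluster plus a single primary instance.
  DocDbCluster:
    Type: AWS::DocDB::DBCluster
    Properties:
      MasterUsername: masteruser
      MasterUserPassword: !Ref MasterPassword
  DocDbInstance:
    Type: AWS::DocDB::DBInstance
    Properties:
      DBClusterIdentifier: !Ref DocDbCluster
      DBInstanceClass: db.r4.large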

Amazon DocumentDB is a fully managed solution that provides the following features:

  • auto-scaling storage (up to 64 TB, in 10 GB increments)
  • simple compute resource scaling (resources allocated to an instance can be modified by changing instance class)
  • built-in monitoring, fault detection, and failover
  • daily snapshots

AWS DocumentDB vs AWS ElasticSearch

DocumentDB and ElasticSearch have a lot of features in common; in fact, you could even use ElasticSearch as a primary datastore. Some of the features they share are:

  • document-oriented store
  • schema-free
  • distributed data storage
  • high-availability
  • replication

However, they come from two different database families and are made for different purposes: DocumentDB is a document store, while ElasticSearch is a search engine.

Here are some key differences between the two:

  1. Indexing — ElasticSearch uses Apache Lucene for indexing, while MongoDB indexes are based on the traditional B+ tree. The real-time indexing and searching power of ElasticSearch comes from Lucene, which by default allows an index to be created on every field of a document. In MongoDB, we have to define indexes explicitly; they improve query performance but affect write operations.
  2. Writing — ElasticSearch is slower at adding new data. In ElasticSearch, indexing semantics are defined on the client side, so indexing cannot be optimised as well as it can with DocumentDB.

In practice, ElasticSearch is often used together with NoSQL and SQL databases: a datastore is used as persistent storage and the source of truth, and ElasticSearch is used for complex search queries.

Another key consideration when evaluating DocumentDB vs ElasticSearch is the effort and complexity associated with defining, sizing, and maintaining an ElasticSearch domain. This is not straightforward (it is genuinely hard to size storage, shards, and instances correctly). AWS provides some good guidelines, but it remains more complex than working with DocumentDB, which doesn’t require these considerations.

Hotels.com Architecture team’s advice

At Hotels.com we currently use many different datastores and search engines, so it is worth summarising our advice on when Amazon DocumentDB is, and is not, a good option.

Amazon DocumentDB is a good solution when you need to store unstructured data that doesn’t require too many indexes or complex search features.
A real benefit is that you don’t need to think too much about queries upfront. This is particularly useful when you are not the owner/producer of the data you are storing: you don’t need to adapt your schema to a possible new data structure (as you must with a SQL database like Amazon Aurora), and you can also query data using new fields (something you cannot easily do with a NoSQL solution like Amazon DynamoDB, where your data schema is based on your queries).

It is also a good solution only when you don’t need rich indexing capabilities or complex/fast search support (ranked results, full-text search with partial matching that doesn’t resort to regex, complex geospatial queries with inclusion/exclusion). For those kinds of scenarios, Amazon ElasticSearch is a better choice.

Currently, Amazon DocumentDB has two big drawbacks:

  • no multi-region support
  • provisioned mode only (no serverless option)

Hotels.com at dotSwift 2019

Lewis Luther-Braun | Hotels.com, London

Photo provided by dotConferences

In the last week of January, two engineers from the Hotels.com iOS team went to Paris to take part in the 5th annual dotSwift conference. For those who don’t know what a dotConference is, let me bring you up to speed: dotConferences are the equivalent of TED talks, but focused on topics from the tech industry. There are 7 different flavours: dotSecurity, dotScale, dotAI, dotGo, dotCSS, dotJS, and our very own dotSwift.

It was a great day for meeting engineers from across the industry, as well as other engineers within the Expedia Group — namely, members of the iOS team from Traveldoo in Paris.

The day was broken into 3 sets of talks with breaks between them.
The talks ranged from the sublime, such as how ‘pure Swift’ apps aren’t really a thing (they all rely on the Objective-C runtime) and ways of embracing Objective-C instead of trying to get rid of any mention of it as fast as possible, to the ridiculous, such as a proposal that you should use Unicode characters in your code for method and variable names.

I feel like I should give this one a bit of explanation:
The talk was far from suggesting that you do something like this:

⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⛵️⎈ ⬅

to tell your boat object that it should steer left. That notation could probably pass as a contemporary art piece, but it’s definitely not useful as a standard for a naming convention. Instead, the talk focused on scientific modelling: using the same notation the equations use, such as Σ (sigma) for sum and λ (lambda) for wavelength, as function and variable names respectively. This makes sense if you’re working with physicists who don’t want to look at long function names (no matter how descriptive they are), and it also gives them an opportunity to debug the algorithm as opposed to your code.
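
As a toy sketch of the idea (the talk was about Swift, but any language with Unicode identifiers works; this example is TypeScript, with made-up physics values):

// A function named Σ for summation and a variable named λ for wavelength,
// mirroring the notation of the equations being modelled.
const Σ = (xs: number[]): number => xs.reduce((acc, x) => acc + x, 0);

const c = 299_792_458; // speed of light, m/s
const λ = 589e-9;      // wavelength in metres (sodium D-line)
const ν = c / λ;       // frequency, read straight off the formula ν = c / λ

console.log(Σ([1, 2, 3]), ν);

A physicist reviewing this can check it against the equation directly, which is exactly the debugging benefit the talk described.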

Photo provided by dotConferences

It was brilliant to hear ideas from some very talented individuals — we even got to hear talks from people working on open source projects at Apple, such as SwiftNIO (an asynchronous event-driven network framework) — which gave real insight into the problems they were encountering and how they went about solving them.

As well as the main talks there were a number of lightning talks given by members of the Swift community. These were super quick talks, straight to the point, often providing food for thought or presenting useful approaches to problems or handy tips.

Photos of the talks are available at https://dotswift.io.
Videos are available to watch: https://www.dotconferences.com/conference/dotswift

I’d highly recommend giving them a watch — maybe you’ll find a solution to an issue that you are currently encountering or learn something new.

Three things I learnt being a scrum master

Giuseppe Sorrentino | User Interface Engineer, Hotels.com in Rome

Originally published on The Hotels.com Technology Blog

Introduction

I am very happy to have had the opportunity to work in the Agile world for almost four years. They have been fantastic and challenging.

Being a Scrum master is an invaluable experience and makes you understand and reflect a lot about company processes and software development in general.

It is very hard to discover and address dysfunctions in a team’s processes; they are often sneaky. Metrics and surveys can help, but you need to develop an insight to recognize them, and doing so helps you improve a lot as a person and a professional.

I decided to share with you three thoughts I noted down in these years.

1. Training is not enough, make it real by being assertive (when necessary)

Over these four years I did tons of training and prepared tons of presentations on the various agile practices and artifacts: Kanban, Scrum, backlog and backlog refinement, and pair programming, to name a few.

One thing I learnt is that while training on agile is valuable, practice is more valuable. The capacity to make practices real in day-to-day life is fundamental to the Scrum master profession. There are two different, antithetical approaches to achieving that:

  • wait for a practice to emerge within the team
  • be assertive and push for the adoption of beneficial practices.

Being able to find the right balance between these two approaches is fundamental to the Scrum master role. In a perfect world the Scrum master would always choose the first approach, but in the real world this is not always feasible. For example, there are situations where it is not possible to wait until the team becomes mature enough to adopt a practice. Those occasions, in my honest opinion, are when the Scrum master needs to be assertive.

2. If you want to go with Kanban, start with Scrum

I am assuming you are familiar with Tuckman’s stages of group development.

Tuckman’s stages of group development

It is harder to start directly with Kanban than to start with Scrum and transition to Kanban. Kanban requires much more discipline from the team than Scrum: pulling stories at the right time and limiting the amount of work in progress are very challenging tasks, even for a very small group of people. This makes Kanban a better fit for teams in the norming or performing phase, or at least past their beginning. Scrum, being more prescriptive, is perfect for a team in the forming or storming phase.

It is a good idea to start with Scrum and transition smoothly to Kanban when you feel the team is ready, that is, when the team is entering the norming/performing phase. There are many indicators that a team is transitioning toward the norming/performing phase:

  • stability in practices adopted
  • stability in team composition
  • continuous success of sprints
  • self-organization in main scrum ceremonies
  • stability in velocity and throughput.

3. Scrum application outside the software world often is not clear

While Scrum is supposed to be a universal framework, in the sense that it should be applicable outside of the software world, this application is not always immediately clear.

At Hotels.com we give Agile training to very different functions, and we have encountered difficulties in finding ways to apply Scrum to certain settings outside of technology. For example, there is not much literature on how backlog items should be documented in these contexts, nor is it clear how to manage settings where the work is mostly individual rather than team-based.

Conclusion

I had four challenging years as a Scrum master, and the opportunity made me grow as a person as well as an IT professional. During these years I had the chance to reflect on some aspects of Scrum master practice.

In particular, I discovered that the Scrum master needs to be assertive and push for the adoption of beneficial practices when necessary. The natural emergence of all team practices is simply a Scrum myth.

Furthermore, I think that starting directly with Kanban can be counterproductive for a new team. My suggestion here is to evaluate Scrum as a bootstrap for Kanban.

The last point: Scrum’s universality (its application outside of IT projects) is not crystal clear. Here, a concerted community effort is needed to make Scrum more accessible.

Thanks to Gayathri Thiyagarajan.

Finatra in a Haystack

Originally published on The Hotels.com Technology Blog

Ryan Burke | Software Development Engineer, Hotels.com in London

Haystack is an Expedia-backed open source project to facilitate detection and remediation of problems with enterprise-level web services and websites. Haystack uses tracing data to help locate the source of problems, providing the ability to drill down to the precise part of a service transaction where failures or latency are occurring — and find the proverbial “needle in a haystack”. Once you know specifically where the problem is happening, it’s much easier to identify and understand the appropriate diagnostic data, find the problem, and fix it.

Finatra is a web framework created by Twitter, built on top of TwitterServer and Finagle; it is the web framework of choice for the majority of Scala core services at Hotels.com. Recently, we wanted to integrate our services with Haystack in order to have distributed tracing information across service boundaries.

Finatra supports tracing out of the box using the standard Zipkin X-B3-* HTTP headers. In order to report this data to Haystack, we needed to publish the tracing data to a proxy service we run, which forwards it to both Zipkin and Haystack.


zipkin-finagle

Fortunately for us, zipkin-finagle provides functionality for reporting tracing information over a network. This library allows tracing information to be sent via HTTP, Scribe, or published to a Kafka topic. Creating a new Zipkin tracer is simple once you bring in zipkin-finagle as a project dependency:

val config = HttpZipkinTracer.Config.builder()
  .host("zipkin-host:80")
  .hostHeader("zipkin-host")
  .initialSampleRate(0.0)
  .compressionEnabled(true)
  .build()

val tracer = HttpZipkinTracer.create(config, statsReceiver)

In the Finatra app’s HttpServer class, you can set the tracer and the label used in reporting by overriding the configureHttpServer function.

override def configureHttpServer(server: Http.Server): Http.Server =
  server
    .withLabel("service-name")
    .withTracer(tracer)

After this, sending tracing headers to the service will result in the data being published to Haystack for visualisation. If you’re using Finagle clients to call other services as part of a request, these will automatically be propagated and all your dependencies will show up too.

Haystack tracing visualisation

Dealing with Futures

Finatra and Finagle are designed to operate in a non-blocking, asynchronous way, allowing them to scale and keep the overhead of accepting a new request low. There is no global request thread pool to configure; just don’t block when you’re handling the request. As a consequence, in asynchronous code there is no single request thread on which to hang something like MDC (Mapped Diagnostic Context), which is how you would normally keep track of per-request state such as tracing information.

When using Scala’s Future[T] we need some way to manually carry the tracing information across thread boundaries. We found there was no elegant way to do this without creating a wrapper around Future which copies a context between execution threads. Alternatively, you can create a custom ExecutionContext for the Future to run in that provides the same functionality. Problems arise when you use a third-party library or some bit of code that doesn’t let you define the ExecutionContext or the return type.

Twitter were an early adopter of Scala and provide a util library which duplicates and builds upon the Scala standard library. This includes the Twitter Future: a cancellable Future with no ExecutionContext to manage and a built-in ability to carry a Context across thread boundaries. The Finatra server uses them at the edge, and Finagle clients return Twitter Futures too. If you use them throughout your application instead of the standard Scala Future, you get tracing propagation for free, at the expense of being a little more tied into the Twitter ecosystem.


Twitter Service Loader

One thing to watch out for is that the zipkin-finagle library defines a service in the META-INF/services folder. Finatra uses Guice for dependency injection, and if a library defines a file in the services folder, the service will auto-magically be created for you and registered in the service registry. This can make it easier to integrate with Zipkin: you can skip all the code changes above and instead set some environment variables to let the library create and register the service for you.

In my team we tend to prefer explicitly defining behaviour rather than relying on the magic components of frameworks. It’s why we moved away from Spring, wire everything manually, try to avoid internal shared libraries, and write our own request filter logic.

Once we manually wired in the tracer using withTracer, we assumed it would override the one created by the service loader, but we were wrong. Both were created and running at the same time, and the unconfigured default tracer threw errors (it defaults to sending data to localhost). To disable it, we had to modify our Dockerfile to add an additional Java opt:

ENTRYPOINT ["/bin/sh", "-c", "exec java $JAVA_OPTS -Dcom.twitter.finagle.util.loadServiceDenied=zipkin2.finagle.http.HttpZipkinTracer -jar service.jar $0 $@"]

This is a bit nasty: we have a hard-coded class name in our Dockerfile, and if the class is ever renamed we’ll be back to loading two HttpZipkinTracer instances. That’s the cost of being able to define the tracer ourselves.


Shameless plug

We are hiring! If you’re passionate about software engineering and what we do sounds interesting, check out our roles!

Deciphering Product Roles

Amanda McArthur | Talent Advisor, Expedia Group in Bellevue, WA

Product, Technical Product, and Program Management. If you are in the product world, you know the struggle is real. Companies (and sometimes even teams) have different definitions for each. It can be difficult to understand what roles are a strong fit given your background and personal career goals.

My goal here is to help you maneuver Expedia Group and find exciting opportunities with us that are more in-line with your experience or career goals.

First, the Program Manager:

In several large tech companies, this title predominantly describes someone closely aligned with Engineering. Generally speaking, within Expedia Group the Program Manager is more focused on business processes and programs, with one exception: the title Technical Program Manager is used in a few divisions, where the responsibilities are similar to a Technical Product Manager’s.

This role is great for someone who excels at surveying the ‘big picture’. You enjoy finding and fixing inefficiencies. You build business processes and programs that are streamlined, cross-functional, and scale well. Like most other Product or Program roles, you are also an excellent communicator who can build consensus by influencing without authority.

While searching, consider your areas of expertise as well, and use keywords to narrow your results. Maybe your specialty is talent acquisition, business operations, finance, or marketing. If you do focus within a functional area, include it in your search.

https://lifeatexpedia.com/jobs/?keyword=Program%20Manager

Technical Product Manager:

Within the Expedia product ecosystem, we have both a Technical Product Manager (TPM) and a Product Manager. As a TPM, you are more closely partnered with Engineering teams.

All of our teams follow the Agile methodology, which means you can expect to attend (if not lead) daily standups. You will likely build user stories and participate in sprint planning. The lengths of our sprint cycles vary by team. Some could be as short as a week, others are a few weeks. We have a ‘Test and Learn’ culture and a bias toward action – giving our teams the ability to move faster with less red tape.

While most roles don’t require a background in software development, it does help in most cases. I’ve seen a lot of Engineers make a successful transition from development to TPM. It’s a natural progression for those wanting to take on broader responsibilities over product creation. You’ll partner cross-functionally with several teams. You act as a liaison and help your less technical counterparts understand technology constraints and possibilities. You’ll also help to communicate timing for execution, helping to prioritize feature work within the roadmap.

Keep in mind if you’re looking to move into Technical Product Management, there are some TPM roles that definitely need someone who comes from a hands-on development background. While this isn’t the norm, I have seen roles where the TPM would continue to own some code as part of their broader responsibilities.

https://lifeatexpedia.com/jobs/?keyword=technical%20product%20manager

Product Manager:

This is purely my opinion, but I believe finding the right Product role is pretty tricky. The level of technical aptitude needed to be successful is different for each team and depends heavily on the product space. Because most of our Product teams are dealing with digital products, the level of technical knowledge needed tends to be on the higher end of the spectrum.

That said, there are definitely Product Management roles that are more focused on stakeholder management, strategy, or user journey and UX. As the Product Manager, you own the roadmap planning, feature release cycles, backlog prioritization, varied levels of reporting, and product related problem-solving.

In general, all of our Product Management teams are looking for someone who is comfortable working in a highly matrixed organization. Because a lot of products span multiple brands, you may have several stakeholders, and they could be located all over the world. That means that not only will you work cross-collaboratively with UX, Engineering, Marketing, etc., you may also have the added complexity of working across brands. If you’re looking for more complexity, this may be perfect for you.

https://lifeatexpedia.com/jobs/?keyword=product%20manager

A few things to keep in mind:

Our teams are truly global. I know, on the surface this doesn’t sound very different from other large tech companies, so I’ll explain. I’ve worked with companies that have a large global footprint; however, in a lot of cases the product work was dispersed by location. London had its part, Sweden had another, and both were part of a larger body of work. In those cases they had regular check-ins, but the interdependencies were fewer and required less coordination. In our case, your immediate team may have a global footprint; it’s possible that you’ll be managing close dependencies while coordinating with immediate team members located on the other side of the globe.

Your Search:

First and foremost, don’t be discouraged if one position isn’t the right fit. If you are a Product veteran, you probably already know how unique each position is. Maybe you don’t have much experience with complex information architecture, but you nail the customer experience and user journey. Everyone’s professional experience is different, and those differences are what make you a unique fit for the right team.

Meet the HomeAway UX Research Team

After learning more about what our UX Research Team does, you may start to think their jobs resemble those of undercover spies. Between the two-way mirrors, eye tracking glasses, and emotion recognition software, it’s safe to say they get to work with some pretty cool technology. This group plays a crucial part in product development because they are constantly testing, reporting, and providing recommendations on the latest updates and additions to the HomeAway website and native apps.

Here’s a closer look at what they do and what it takes to be successful researchers in their words:

The team hanging out in their comfy observation room.

Q: Let’s start with the basics, what does the product release and research process look like?

“We start the research process by meeting with the design and product teams to gather feedback from key stakeholders on the specific goals of the study. Then, we prepare a brief to outline the objectives, the method of the study, and the profile of the participants. Once the brief is completed, other researchers typically review it.

Throughout the process, we hold several meetings with the project stakeholders to keep them informed and complete updates on the different deliverables needed such as the status of new study prototypes, the study guide, and recruitment of the participants. Once the sessions have been conducted, we spend time analyzing the data, then we write a report to present the findings and recommendations back to the project stakeholders.” – Sara, User Experience Insights Senior Manager

Q: What problems is your team solving?

“We do research to understand our users and optimize their experience on the HomeAway website and app.” – Aniko, Sr. UX Researcher

“One of my favorite (very Texas) quotes about the difference between UI (User Interface) and UX (User Experience – the research we do) and how our work impacts users: ‘UI is the saddle, the stirrups, and the reins. UX is the feeling you get being able to ride the horse and rope your cattle.’” – Tim, UX Researcher

Part of the team at the 2017 holiday party in Austin. (Left to right: Karl, Aniko, Drew, Jenn, Stephanie, Tim)

Q: That’s a great visual! What’s an interesting project you’ve worked on lately?

“I recently worked on a UX test of the Reservation Manager tools used by our partners in four different countries. It’s been very insightful because the test revealed UI opportunities across markets and helped us recommend the right enhancements to the product and design teams.” – Sara, User Experience Insights Senior Manager

“I tested HomeAway television ads using methods from cognitive neuroscience to understand what engages our travelers. We used eye tracking, facial expression recognition software, surveys, and interviews to learn what makes travelers experience those heartwarming feelings you get when you’re on vacation. It’s been really fun working together with UX Research, UX Content, and the Marketing teams to apply the scientific mindset and help HomeAway’s content shine.” – Drew, UX Researcher

“I think the Northstar (new design) concepts are probably the most fun because they are progressive and it’s fun to work on the next big thing. I’m excited to contribute to the development of our latest designs by collecting traveler feedback on prototypes in our Austin lab space.” – Lukas, Sr. UX Researcher

“Working with our team and other stakeholders to make sure we’re doing the most impactful research, and planning for our next-generation labs.” – Karl, Director of User Experience Research

Aniko preparing a participant.

Q: What does it take to be successful on your team?

“Good communication, be personable and understand when to speak and when to listen.” – Tim, UX Researcher

“Great people skills and attention to detail.” – Stephanie, UX Research Producer

“The curiosity to want to understand ‘why,’ the discipline to employ the right scientific approach to uncover answers, and the passion to see the answers get turned into positive changes to the product.” – Karl, Director of User Experience Research

Q: What’s something you’ve learned since joining this team?

“How expansive the research is at HomeAway and how wonderful it is to have buy-in from so many different teams regarding our research.” – Tim, UX Researcher

“Using the emotion recognition software and survey tools” – Aniko, Sr. UX Researcher

“Prioritizing one project over another can be tough because we want to answer ALL the research questions we can. We’re problem solvers and answer seekers.” – Lukas, Sr. UX Researcher

A HomeAway employee trying out the emotion recognition software and eye tracking glasses.

Q: Any funny stories you can share from past studies?

“Funny stories? You have to sign a nondisclosure agreement first!” 😉 – Jenn, UX Researcher

Q: Ah we get it, you can’t tell us because of privacy rules. Do you have a favorite program or tool?

“Python, specifically the Pandas, NumPy and SciPy libraries” – Drew, UX Researcher

“Eye tracking and the two-way mirror in the London Innovation Lab. I also enjoy using our emotion recognition software.” – Sara, User Experience Insights Senior Manager

“I’m really interested in all of our lab equipment like PTZ cameras, rack-mounted recording and streaming, and figuring out how we can incorporate future technologies into our testing.”  – Tim, UX Researcher

Q: Last question, do you celebrate a little after you wrap up a test or move on to the next project?

“We do celebrate sometimes after we successfully complete a user study or after our recommendations are well received.” – Aniko, Sr. UX Researcher

“I get a little adrenaline rush when the last participant completes the session. Then it’s time to debrief with any observers and start thinking about what all those observations mean when taken together.” – Lukas, Sr. UX Researcher

The moderator workstation, aka: what it looks like to be on the other side!

Want to join Team HomeAway or check out other cool perks we offer? Visit our careers page!

Follow Life at HomeAway on social media

Expedia Group embraces web accessibility as an opportunity to make a difference in the lives of disabled travelers

Toby Willis | Software Engineer in Test II in Seattle, WA

Did you know there are over 1 billion people with disabilities in the world?

…that’s more than 15% of the population. I’m learning these stats because I lost my vision to a degenerative retina condition known as Leber Congenital Amaurosis. You can read more about my experience in The Seattle Times and US News & World Report.

I functioned as a sighted person growing up and for the first part of my professional life, working with my hands in a variety of fields from construction to manufacturing, and I even built and sold a successful industrial maintenance startup. I sold that business because my vision had deteriorated to the point that I could no longer drive, efficiently read print, or QA my employees’ work.

Subsequently, I returned to university to learn a new skill set, and that is where I rediscovered tech. I had done some simple programming in junior and senior high school but had not thought much about code beyond the simple scripts running on the primitive manufacturing equipment I often maintained. I landed a job in the Adaptive Technology Center (ATC) at Middle Tennessee State University, where I had enrolled to finish my undergraduate degree. There, I helped other students with disabilities access information and learn to use assistive devices and software to live more independently.

Being a lifelong musician who already had a degree in music, I decided to pursue a degree in Recording Industry Management. As my vision slowly worsened and the recording equipment and software got smaller and more complicated, I found it increasingly difficult to be efficient and compete in Nashville, where I lived at the time.

Working with other students with disabilities in the ATC gave me valuable insight into the challenges we face as users who depend on assistive technologies to access information and gain an education. My vision was gradually getting worse, and the magnification equipment and software just couldn’t make the print big enough any longer. I began relying on a screen reader application to speak the contents on the screen around this time in my life and started to realize how difficult it is for people with disabilities to be productive and independent in an increasingly digital world. That’s when I pivoted my career toward “Accessibility.”

After working my way up through the university, I took a Director of Student Disability Services role at Nashville State Community College. The college happened to have a large Deaf population, which made for a great learning experience for us all in better communication. Later, I took a job at City University of Seattle; that’s how I got to the PNW. Although I enjoyed working with students at the university, I wanted to dive deep into a problem and make a meaningful contribution. Given that our world was then, and still is, moving more and more onto the web, I wanted to help solve the problem of web accessibility, or the lack thereof.

In 2014, I heard that Expedia Group was looking for a screen reader user to consult with the Client-side Engineering team to improve the usability of the website and mobile app for customers with disabilities. I jumped at the chance because I love to travel and had never been able to independently book a trip using a screen reader. I came on board in August of 2014 and was pleasantly surprised to learn that Expedia Group embraced web accessibility as an opportunity to make a difference in the lives of disabled travelers while making our products better for everyone. We really dug into inclusive design practices, good markup and architecture, and building an adequate testing protocol. I’m proud to have been part of creating what I believe is one of the best eCommerce experiences a screen reader user can find on the web.

I am not disabled; there are only barriers that are more difficult for me to surmount. If we work to remove those barriers, people can live more independently, be more productive, and make more meaningful contributions to society. Disability is the largest minority in the world. Many individuals with disabilities want to participate in society but simply can’t, due to the physical and social barriers that exist. Around 50 percent of people with disabilities are unemployed, underemployed, or marginally attached to the workforce. That means there is a huge untapped talent pool waiting to participate in making everyone’s life better.

You can help remove a barrier by opening your mind to disability as diversity, working to overcome conscious and unconscious disability bias, and making an effort to include someone with a disability in your professional and personal life.

“Diversity is inviting me to the party; inclusion is asking me to dance…” Author unknown (possibly attributable to Verna Myers on Twitter)

Distributed GraphQL Schemas with NPM Modules

Trevor Livingston | Principal Architect, HomeAway in Austin, TX

How HomeAway is utilizing npm modules and schema partials to create GraphQL components for self-orchestrating apps and services

HomeAway uses the simplicity and flexibility of GraphQL to insulate applications from change and accelerate UI and API development.


For a little over two years, we‘ve been busy replatforming our web applications at HomeAway to Node.js using hapi and React. Last year, we sought to simplify reuse for data fetching and orchestration between our native mobile applications and the many new web applications being developed.

GraphQL lets developers provide data to both web and native experiences while allowing the resolution of how that data is provided to evolve over time, mitigating the impact on the many, many UI components we have developed. This, in part, is why Facebook developed GraphQL.


Although we were already impressed by the power and simplicity of GraphQL, the typical process of schema creation meant that schemas were not easy to share between applications unless exposed as common services, which introduced a model prone to issues at scale (see Killing BFFs with GraphQL).

Although anyone could have begun adopting GraphQL at any time for their application, we sought to operationalize GraphQL at scale for all of HomeAway. To do this, we wanted to develop tooling that allowed us, among other things, to:

  • Provide support for internal concerns such as logging and metrics.
  • Enable reuse between applications through modules.
  • Enable developers to pick the types of queries they needed for their application.

This led us to the development of a convention we refer to internally as a “GraphQL partial”. While breaking up schemas into multiple files isn’t a new thing, componentizing them requires a little glue.

A GraphQL partial is simply an npm module that exports enough information for us to construct an executable schema. That means a partial needs to export some types, as well as the resolvers for those types as needed.

A contrived GraphQL partial example
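
The original example shipped as an image; a reconstruction in the same spirit (the module layout and type names are invented for illustration) might look like this:

// listing-partial/index.ts — a hypothetical partial published as an npm module.
// It exports type definitions plus the resolvers for them.
export const types = `
  type Listing {
    id: ID!
    title: String!
  }

  # Extends the empty root query provided by the server tooling.
  extend type Query {
    listing(id: ID!): Listing
  }
`;

export const resolvers = {
  Query: {
    // Stubbed for illustration; a real partial would call a service.
    listing: (_root: unknown, { id }: { id: string }) =>
      ({ id, title: `Listing ${id}` }),
  },
};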

You will notice that the query type in this example uses the extend keyword. This is because there will be many partials defining queries or mutations; to allow this, our tooling provides an empty root query and mutation for these type definitions to extend.

Once a partial has been defined, what remains is to declare the partial schemas to use and stitch them together into a single executable schema.


As mentioned earlier, HomeAway uses the hapi framework for building applications. In addition, we bootstrap the hapi server through an environment-aware configuration engine called steerage.

Example of JSON configuration for steerage

steerage makes it easy to configure the partials and set up a GraphQL server in a consistent fashion; once the partials have been specified, they can be stitched together. HomeAway uses Apollo to serve GraphQL, although we wrap it to inject context and to accept and merge GraphQL partials.

Apollo also makes some other useful tools, one of which is makeExecutableSchema in the graphql-tools module. makeExecutableSchema brings together type definitions and resolvers into a single schema.

makeExecutableSchema example from Apollo’s graphql-tools documentation
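
The gist of that example, reconstructed rather than copied from the documentation (so treat the type and field names as illustrative): type definitions and their resolvers go in together and come back as one executable schema.

import { makeExecutableSchema } from "graphql-tools";

const typeDefs = `
  type Author {
    id: Int!
    firstName: String
    lastName: String
  }

  type Query {
    author(id: Int!): Author
  }
`;

const resolvers = {
  Query: {
    // Stub resolver; a real one would fetch from a data store.
    author: (_root: unknown, { id }: { id: number }) =>
      ({ id, firstName: "Jane", lastName: "Doe" }),
  },
};

export const schema = makeExecutableSchema({ typeDefs, resolvers });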

So far, we haven’t done anything particularly different from a well-known pattern for breaking up schemas. The challenge in breaking up schemas really surfaces when you want to publish them as separate modules, especially when it comes to root types.

This brings us back to our use of the extend keyword and the little bit of utility we wrapped on top of the GraphQL server. Our server adds the empty root types, merges the different types and resolvers exported by the partials, and lastly uses makeExecutableSchema and passes the result onward. We also use additional tooling to detect type conflicts ahead of time.

Example of merging partials

Adding empty root types


The final bit is providing the empty root types for each partial to extend. Rather than providing an entirely empty root type, we use a _null attribute with a no-op resolver to enable merging multiple schemas.

Empty root types
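
Putting those pieces together, a sketch of the merge step might look like this (the partial module names are hypothetical, and the real tooling also detects type conflicts ahead of time):

import { makeExecutableSchema } from "graphql-tools";
import merge from "lodash.merge";

// Hypothetical partials, each exporting { types, resolvers } as above.
import * as listings from "listing-partial";
import * as reviews from "review-partial";

const partials = [listings, reviews];

// Empty root types; the _null no-op field keeps each type valid even
// before any partial extends it, which is what makes merging possible.
const rootTypes = `
  type Query { _null: Boolean }
  type Mutation { _null: Boolean }
`;

export const schema = makeExecutableSchema({
  typeDefs: [rootTypes, ...partials.map((p) => p.types)],
  resolvers: merge(
    { Query: { _null: () => null }, Mutation: { _null: () => null } },
    ...partials.map((p) => p.resolvers)
  ),
});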

The result is a simple utility that enables different applications to pick and choose their query capabilities.


Although the capability to build and reuse partials empowers teams to more easily craft schemas for their use cases, there are additional challenges to overcome.

As the number of partials grows, encouraging widespread reuse and discouraging the redefinition of existing types can be challenging without good discovery practices, such as collocation of partial modules and excellent documentation.

The GraphiQL IDE presents another challenge. GraphiQL is intended for interacting with a single schema; with many partials this schema can grow very large. This may make it difficult to view all possible partials in a single place.

Shakespeare and Company bookshop. Photo: Alexandre Duret-Lutz, Creative Commons

Finally, testing presents additional considerations. Since the partials are separate modules, applications incorporating them may not know how they are resolved upstream. Services, for example, must be accessible or mocked, and this requires discovery of what these upstream services are.


Today GraphQL — and our partials paradigm — has become our de facto standard for UIs to interact with and query data. We use GraphQL in our native mobile applications as well as multiple web applications.

To date, we have used GraphQL primarily to fulfill UI requirements, but we have begun to experiment with GraphQL for our public APIs as well. While REST and Swagger/OpenAPI have been the go-to for public API platforms for years, I believe we will begin to see more and more general purpose APIs developed with GraphQL.

Follow us here for a future post describing our adoption of Apollo 2 and the changes we’re making to make our partials more powerful and composable. I hope you’ve enjoyed this article. See you soon!

Monolith to Micro-service and Beyond…

Anurag Banka | Software Development Engineer II in Gurgaon, India

In this post, I would like to give a glimpse of a practical application of micro-services to a monolithic product, one that led to better team productivity, customer experience, and product scalability.

Monoliths are services that are not easy to scale, are hard to maintain, and can become a bottleneck to the growth of a product. Rapidly changing customer demands and business circumstances need a flexible and scalable system where new ideas can be introduced at a fast pace. Most monolithic services have a fixed release cycle, bi-weekly or monthly, due to the cumbersome nature of testing and the tight coupling of the domain.

Breaking a complex monolithic architecture into micro-services, based on the different responsibilities of the product, creates a solution for scaling both the system and the business. Articles from Martin Fowler and Chris Richardson are a great source for learning how to bring the best micro-service practices into your domain. A typical transition from monolith to micro-services looks like the figure below.

“If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein

The above statement applies very well to monolithic services, in organizations big and small. With rapidly changing product requirements and team members, it is a challenge to retain domain knowledge, and the existing test framework was never sufficient to cover every aspect of the system under test.

Micro-services are definitely a solution to the problems faced with a monolith, but they are no silver bullet, and several challenges arise on the way there. Some of the big challenges to face while applying micro-service architectural reform to a billion-dollar system are:

  • Defining testing strategy for a new stack
  • Defining new monitoring methods
  • Ensuring high uptime of a system
  • Collective domain knowledge

Knowing your domain is key to successfully breaking any monolithic system into micro-services, but you never know all of your domain and dependencies; any testing framework may cover most of it, but some corner cases will be missing.

Known risks can’t be taken with a live, running system if you have the slightest doubt about your domain understanding or your test suite. Black-box testing (shadow testing) is a solution for building a new system in parity with the old one.

A three-front testing framework that ensures parity upstream, downstream, and at the database can help build confidence in the migration to the new stack. A typical orchestration of such black-box testing would look like the below, with parity checked at every external boundary.
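
As an illustration only (the endpoints and comparison logic here are invented, not taken from the actual framework), the upstream front of such a black-box test might mirror traffic like this:

// Shadow-test sketch: replay the same request against the old and new
// stacks and flag any parity mismatch. Endpoints are placeholders.
const OLD_STACK = "http://old-stack.internal";
const NEW_STACK = "http://new-stack.internal";

async function checkParity(path: string): Promise<boolean> {
  const [oldRes, newRes] = await Promise.all([
    fetch(`${OLD_STACK}${path}`),
    fetch(`${NEW_STACK}${path}`),
  ]);
  const [oldBody, newBody] = await Promise.all([oldRes.text(), newRes.text()]);
  const matches = oldRes.status === newRes.status && oldBody === newBody;
  if (!matches) {
    // A real framework would diff structurally and publish to metrics.
    console.warn(`Parity mismatch on ${path}`);
  }
  return matches;
}

checkParity("/booking/123").catch(console.error);

The downstream and database fronts follow the same pattern, comparing outbound calls and persisted state instead of responses.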

Following the above strategy, it was easy to catch approximately 500 bugs in the new stack. The same framework also served as a bridge between the old and new stacks for easy migration, and it provided both system performance and business performance metrics to measure the success rate of the new system.

Every change made to improve the system should be measured in terms of the system’s success metrics. Some of the results we achieved in the last 6 months are:

  • More than 1% improvement in success rate, a direct impact on revenue
  • Easy scalability of functionality
  • Easy rollout and rollback: N releases a day vs. one release a month
  • Cloud-native solution
  • Faster and better customer support

At Expedia Group, we make a practice of keeping our products as simple as possible. It helps us adopt fast-changing business requirements and build an internal open source culture where teams can collaborate and speed up the delivery of new ideas.

Every new system comes with a new set of challenges: now you have thousands of services and a ton of data with which to make better business decisions and write new success stories. This is just the beginning of a technology shift; we are on our journey into cloud, machine learning…

Come and join us in our journey of “Bringing the world within reach” through the power of technology.