
...

For the second task, contacting a user, we decided to provide only the poster's email address. This is for two main reasons. First, it maintains the minimalism of our site: recreating an inbox does not provide much functionality over email. Additionally, alumni will probably not visit Jobious as often as the undergraduate body, so they are unlikely to check messages on Jobious (and we hate the "You have a message!" emails), but they will likely check their email inbox multiple times a day. Second, this site is designed to be restricted to MIT students and alumni. When looking for information about a position during the job hunt, you may also be trying to solicit contacts within the company from the reviewer, in which case having a known working email address to forward is far more valuable than a closed system.

Registration and Account Updating

We went with a fairly basic registration and account-updating page; however, we do require email verification.
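As a rough illustration of what we mean by email verification, here is a minimal Django sketch. The EmailVerification model, view names, sender address, and /verify/ URL pattern are hypothetical, not necessarily what we shipped:

    # sketch: models.py -- hypothetical verification token tied to a user
    import uuid
    from django.conf import settings
    from django.db import models

    class EmailVerification(models.Model):
        user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        token = models.UUIDField(default=uuid.uuid4, editable=False)
        verified = models.BooleanField(default=False)

    # sketch: views.py -- send the link at registration, flip the flag when it is clicked
    from django.core.mail import send_mail
    from django.shortcuts import get_object_or_404, redirect

    def send_verification_email(request, user):
        record, _ = EmailVerification.objects.get_or_create(user=user)
        link = request.build_absolute_uri("/verify/%s/" % record.token)
        send_mail(
            "Verify your Jobious account",
            "Click this link to verify your address: %s" % link,
            "noreply@jobious.example",
            [user.email],
        )

    def verify(request, token):
        record = get_object_or_404(EmailVerification, token=token)
        record.verified = True
        record.save()
        return redirect("/")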

...

We conducted our test in the conference area of the Student Center Reading Room using a group member's late-generation Apple laptop. No additional peripherals were provided. The team members each brought one user and functioned as an observer for that user.

Finding Users

We recruited users to test our product from within our living groups and from classmates outside of 6.831. All users were undergraduate men and women enrolled at MIT, aged between 18 and 23, in non-Course 6 majors. All had previously used MIT CareerBridge to search for jobs or had applied to internships using a similar service. They all claimed competence in online tasks.

Briefing

We gave the users the following briefing. It is identical to what we gave users during paper prototyping. We did not think a demo was necessary given our users' prior experience with job review sites and their fluency with technology.

Jobious is a web-based internship and job review service specifically designed for MIT students that combines aspects of both MIT's CareerBridge and InfiniteConnection.

Common tasks that a user may perform after logging in are:

  • Reviewing positions you have held
  • Browsing for internship positions recommended to you based on your preferences
  • Marking certain positions as your favorites for later review
  • Sending messages to other users to request additional information about positions they have held.

While participating in this study you should assume that you have previously created an account on the site and are already logged in.

Tasks

Our tasks mirrored those from paper prototyping, as the major goals of using the site did not change. We removed or reworded some of the tasks to better fit the implementation's behavior.

Task Number | Text Prompt
1 | Favorite a job review that is recommended for you.
2 | Read a recommended review and contact the user that wrote it.
3 | Post a review about the position you held last summer.
4 | Find out what positions John Curtice has held.
5 | Review your favorites and decide you no longer like one of them.
6 | View jobs that pay "a lot".

Methodology

We brought the users in one at a time and placed them in front of the computer, which was already logged in. The team member who led them in then became an observer while another (the facilitator) handed them the briefing. The facilitator asked if the user had any questions about the purpose of the site. After clearing up any confusion, we handed the user tasks 1 through 6 one at a time, each as they completed the previous one. At the conclusion we thanked them for their time and asked for any additional feedback they would like to give.

Usability Problems

Task Number 1:

...

Minor The star is really large because we thought that favoriting a review should be easy to do. Our site is somewhat chunky overall, but an element this large and clickable seems to convey that it does (or should do) something more. The easiest option is to make the star smaller, but it is unclear how small is too small without further user testing. Another option would be to keep track of the total number of favorites and display this number when the user favorites a review, giving them some notion of how many other people like it.
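To make the second option concrete, here is a minimal sketch of how the favorite count could be tracked and queried, assuming a simple Favorite model joining users to reviews; the model, app, and field names are illustrative, not our actual schema:

    # sketch: a favorite is just a (user, review) pair; the count backs the number in the star
    from django.conf import settings
    from django.db import models

    class Favorite(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        review = models.ForeignKey("reviews.Review", on_delete=models.CASCADE)

        class Meta:
            unique_together = ("user", "review")  # one favorite per user per review

    def favorite_count(review):
        # the number shown inside (or next to) the star
        return Favorite.objects.filter(review=review).count()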

Task Number 2:

Problem: "Does the recommended page contain reviews or average reviews?"

Major This is a problem we overcame last time by using the accordion to group reviews together. This time, however, we decided that the encapsulation should sit at an even lower level, the individual review, as opposed to an aggregate of reviews for a position.

Task Number 3:

Problem: "What do the numbers mean in the dropdowns? Is lower better?"

...

Severity | Task | Description of Problem | Times Encountered | Possible Solution(s)

Minor | 1 | "Does the star do anything else?" (User expected additional functionality based on the size of the star) | 1 | Make the star smaller, or display some information within the star such as the total number of people that have favorited the review.

Major | 1 | User believed that the recommended reviews were aggregates for a position rather than an individual review. | 2 | List the name of the user that wrote the review to reinforce that one individual created it.

Catastrophic | 2 | User was unsure about the level of encapsulation of the results; he wanted them to be aggregates and clicked on the returned result expecting to be taken to a list of reviews. | 2 | Aggregate the reviews for the same company together and make an intermediary page showing all reviews for that company.

Catastrophic | 3 | User was uncertain of whether 1 or 5 was a better review | 2

Problem: "I have seen a review for Microsoft before, will this get grouped with those?"

...

Our paper prototype even got this right; our latest version should have used the same icon language as the rest of the site, in the same fashion as the paper prototype.
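Grouping only works if the company name is typed consistently, which is why autocomplete over known company names (the solution suggested in the table below) would have helped. A rough sketch of the server side, assuming a hypothetical Company model and a view wired to an autocomplete widget on the company-name field:

    # sketch: a simple autocomplete endpoint over existing company names
    import json
    from django.http import HttpResponse
    from reviews.models import Company  # hypothetical app and model

    def company_autocomplete(request):
        # return up to 10 known company names matching what the user has typed
        term = request.GET.get("term", "")
        names = (Company.objects
                 .filter(name__icontains=term)
                 .values_list("name", flat=True)[:10])
        return HttpResponse(json.dumps(list(names)),
                            content_type="application/json")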

...

Minor | 3 | User misspelled Microsoft and did not catch it before submitting | 1 | Autocomplete based on company name and position; while we do group reviews for the same company together in the background, autocomplete here would have been key.

Major | 4 | User was unclear what was returned by a search for the term "John Curtice" | 2 | List users in a separate return area on the search screen.

Minor

Task Number 5:

Problem: User had cognitive disconnect between "Star" and "Favorite"

Minor This was strange because when we previously told the user to favorite a recommended job, they did so without difficulty. However, when they clicked on Favorites it presented them with several reviews that they had not starred (we reused the user account).

Problem: Unfavoriting a favorite should make it disappear immediately

...


Minor | 5 | The user was distressed that clicking on the star didn't make it immediately disappear from the favorites page, as it is no longer marked as a favorite. | 1 | Fire an AJAX request to remove the div element from the page.
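For that last fix, a minimal sketch of the server side, assuming a Favorite model like the one sketched earlier; the view name and URL are illustrative. The star's click handler would POST here and, on success, the client-side script removes the review's div from the favorites list without a page reload.

    # sketch: unfavorite endpoint hit by the star's click handler
    from django.http import HttpResponse
    from django.views.decorators.http import require_POST
    from reviews.models import Favorite  # hypothetical, as in the earlier sketch

    @require_POST
    def unfavorite(request, review_id):
        # delete the favorite; the client removes the corresponding div on a success response
        Favorite.objects.filter(user=request.user, review_id=review_id).delete()
        return HttpResponse(status=204)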

Reflection

Function is more important than form

We got sidetracked making our site look beautiful instead of making it comply with the task analysis we spent time on initially. If we were to do this again, we would keep the task analysis in a document always open on the desktop to make sure we were always working toward making the user's experience better rather than just prettier.

Easy in paper prototype does not mean easy in code

Our paper prototypes displayed a lot of beautiful behaviors that are really difficult to achieve in software. This set our expectations high, based on how easily these things were said to be accomplished with modern tools. We did not find that to be the case, so when we encountered problems we could not solve after repeatedly banging our heads against them, we cut corners on usability just to make the behavior function.

Get the model right first

We spent a lot of time going back and forth on the data model, debating the correct way to represent all of the data, and we ended up at a level of abstraction that caused a lot of user distress. We do not believe we ended up with the right model of the system for the presentation we wanted, but if we had built a more robust model initially, we would not have had much trouble switching between representations.
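For concreteness, the kind of "more robust model" we have in mind separates companies, positions, and individual reviews explicitly, so the interface can present whichever level of aggregation it needs. This is only a sketch with hypothetical names, not our actual schema:

    # sketch: explicit levels -- Company > Position > Review -- so both
    # individual reviews and per-position aggregates are easy to present
    from django.conf import settings
    from django.db import models

    class Company(models.Model):
        name = models.CharField(max_length=200, unique=True)

    class Position(models.Model):
        company = models.ForeignKey(Company, on_delete=models.CASCADE)
        title = models.CharField(max_length=200)

    class Review(models.Model):
        position = models.ForeignKey(Position, on_delete=models.CASCADE)
        author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        rating = models.PositiveSmallIntegerField()  # e.g. 1 (worst) to 5 (best)
        text = models.TextField()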

Task Number 6:

...

The Waterfall Model Doesn't Work

Perhaps our largest failure was using a waterfall approach to design instead of a spiral model. Even though we did have several iterations of prototypes, with each one we made rather sweeping changes that we inferred would be better based on user feedback from the previous test. Unfortunately, some of those decisions simply headed in a different direction rather than the right one. We should have prototyped and tested those changes before deciding they were the right way to go.

User test aggressively

We made the cardinal sin of assuming we are our users. We didn't really ask people what they thought of our incremental implementations along the way, choosing instead to assume we could make the decision for them. While that might work for experts, we learned that we do not yet have that level of fluency in design language and user behavior, and that the only way to get there is to actively test and retest to verify our assumptions until they begin to match user behavior.

Project Management Decisions impact Design Decisions

We had a very basic working version early in the process, but we then decided to scrap that code and start afresh with something we thought would work better. While it did make many things easier, unfortunately none of us was as well versed in it as we had hoped. This led to us often scratching our heads over how to do something in Django when we could have written the script ourselves and moved on, though it would not have been as maintainable.