Posts tagged “software”

New features on All This ChittahChattah

Thanks to some great technical sleuthing and hacking, we’ve got some good improvements here on the blog:
– an “email this post” link with every post
– an improved “tag cloud” on the right that links to other postings, rather than back to Technorati (who seem to have stopped indexing this blog 55 days ago)
– The blog title is now All This ChittahChattah rather than Portigal Consulting both on the page and in feeds
– I got rid of CoComment because it was (as others had suggested when I started using it) causing some problems. It was slow and was messing up pingbacks (i.e., posts here that refer to other posts here)

Please let me know if you see any weirdness or broken stuff that needs to be fixed!

Lovely Phone; Ugly Software

David Pogue reviews some new phone in Lovely Phone; Ugly Software. I’m mostly interested in the headline, though.

I’m so sick of this as the status quo. Aren’t you?

Award-winning or attractive industrial design is achievable. Usable, joyous, lovely software is achievable. Why is the combination so damn hard? When will companies figure out how to do better? As advanced as we think we are in these fields, it seems big companies are still launching stuff that wrecks your life while making you look hip. We can blame it on organizational silos, or increasingly complex design problems as screen sizes get smaller and usage gets more advanced, but I think there’s a cultural problem (of course) in organizations, as they still don’t get it. They aren’t figuring out how to work together and they aren’t setting high enough standards for what’s good enough to launch.

Sure, this is Motorola in this article, but the story seems so familiar that it could be anyone. I don’t propose simple solutions here, but I do feel so very tired of the problem.

appalled, indeed

37 Signals jumps the shark (if they hadn’t already)

While we appreciate customers who take the time to write in and tell us what they want, the way people phrase things often leads to raised eyebrows. Every feature that’s missing is essential, a must-have, and the fact that it’s missing is killing someone. Yet the #1 thing that people like about our software is how simple it is. To give you an idea of what it’s like to be on the receiving end, here are some excerpts from recent 37signals support emails and forum posts

The excerpts are meant to ridicule the customers/users/people who contact them. For being too intense or too clueless or in whatever way just not as cool as the folks at 37 Signals.

This is a company that makes software but also wants to teach the world about making great products. I’m not sure that their products are really that great, but their credibility for teaching anyone how to do anything is nil once they start using the bully pulpit of their own blog to mock people – customers! And of course, there’s an ensuing pile-on in the comments: “hyuk-hyuk, people are morons.” It’s too easy to get your pals to agree with that sort of thing, and ultimately it reveals contempt for the wrong people. That’s a critical failure at the root of what they are setting out to do.

Update: several other bloggers agree with me.

Baby and toddler education technology – is it bunk?

The New York Times does a great cover story about all the technology products that make strong and unsubstantiated claims about how much smarter they’ll make your baby.

New media products for babies, toddlers and preschoolers began flooding the market in the late 1990’s, starting with video series like “Baby Einstein” and “Brainy Baby.” But now, the young children’s market has exploded into a host of new and more elaborate electronics for pre-schoolers, including video game consoles like the V.Smile and handheld game systems like the Leapster, all marketed as educational.

Despite the commercial success, though, a report released yesterday by the Kaiser Family Foundation, “A Teacher in the Living Room? Educational Media for Babies, Toddlers and Pre-schoolers,” indicates there is little understanding of how the new media affect young children – and almost no research to support the idea that they are educational.

“The market is expanding rapidly, with all kinds of brand-new product lines for little kids,” said Vicky Rideout, vice president of the Kaiser Foundation. “But the research hasn’t advanced much. There really isn’t any outcomes-based research on these kinds of products and their effects on young children, and there doesn’t seem to be any theoretical basis for saying that kids under 2 can learn from media.”

In 1999, the American Academy of Pediatrics recommended no screen time at all for babies under 2, out of concern that the increasing use of media might displace human interaction and impede the crucially important brain growth and development of a baby’s first two years. But it is a recommendation that parents routinely ignore. According to Kaiser, babies 6 months to 3 years old spend, on average, an hour a day watching TV and 47 minutes a day on other screen media, like videos, computers and video games.

Others have less restrained marketing: The “Brainy Baby – Left Brain” package has a cover featuring a cartoon baby with a thought balloon saying, “2 + 2 = 4” and promises that it will inspire logical thinking and “teach your child about language and logic, patterns and sequencing, analyzing details and more.”

“There’s nothing that shows it helps, but there’s nothing that shows it does harm, either,” said Marcia Grimsley, senior producer of “Brainy Baby” videos.

Incredulous italics mine, of course.

Rambling thoughts on “User Research Strategies: What Works, What Does Not Work”

Last night’s BayCHI event was a good experience. A panel of champions of user research (Director, Manager, Lead, etc.) at key Silicon Valley companies (Google, Yahoo, Adobe, Intuit, and eBay) attracted a large and energetic crowd. The pre-meeting dinner was extremely well-attended. It felt like the discipline was having a good moment in the zeitgeist.

Each panelist gave a 10-minute summary of what’s going on at their firm: what types of methods they’re using, how they’re feeding design and strategy, how they might interface with market research and other areas of the business, and where they’ve been historically and what may have changed in how they’re embraced (or not).

Rashmi Sinha moderated, and spent only a bit of time asking her own follow-up questions after the panelists finished; then it was basically an hour (?) of audience questions. Although I stood up and asked a question myself, I would rather have had more questions from her and fewer from the audience (that’s my bias, I guess, as someone who’s played that role in the past); audience questions are not usually about creating conversation, and the moderator is obviously in a role to do that. By the time we finished, people were asking about how to recruit participants for studies – a tactical question that had no place in this meeting. If we were there to discuss the practice and how it integrates into corporate America, then let’s not deal with newbie process questions. I’m not minimizing the importance of that question to the person who asked it, but it wasn’t on topic and kinda brought things down for me.

People mumbled afterwards about wanting to see some conflict between the panelists, who represented competing firms (though they maybe didn’t see themselves individually as competitors) and who sometimes expressed different points of view on how to use the tools of user research. It’s hard for me to be specific from memory, but there were several examples from Google about how user research wasn’t always necessary – ridiculous strawman examples, as in, “should we have told people not to build a search interface until they had done years of research?” – when I don’t think anyone was advocating years of research, more so the opposite. But it wasn’t in the panelists’ charge to debate what they heard from the others; they were there to tell their own story, and they all did that very well.

Perhaps the comments about conflict are proxies for my desire for more conversation – something that (as user researchers know) takes good questions, and frankly, audience members just aren’t going to ask good questions. This sounds terribly snobby, so let me clarify: there are questions that are informational (what type of deliverables do you use? how do you recruit?) and there are questions that provoke conversation and interaction.

There’s another panel phenomenon at work here – “question drift” – whoever answers the question first is the most on target; as other answers come from the panelists, we end up hearing about an entirely different question, and we’ve lost the thread. I don’t have a solution to this. Sometimes the drift is interesting, but often it’s just a bit frustrating.

So it was hard to take much specific away from the evening – there was a lot of info; a lot of bits of perspective and insight and jargon thrown out quickly, with something new on the heels, so I felt like it was an immersion more than an education.

But here’s what I took away:

  • this is a mature field; you can see the newer practices (Google) presented as adolescent next to their more wizened counterparts
  • I’ve lived as a consultant for a very long time; there’s a whole set of challenges and benefits that these corporate folks have that are almost alien to me – I felt very aware of how much more easily I can deal with some of their situations, and of how many more formalized and permanent processes are being created there that I’m not at all engaged with
  • it’s just tradeoffs and contrasts; one isn’t better than the other – we need both in-house and out-of-house
  • it’s not clear to me how research is different from design (and I don’t mean Research and Design) – this was my question, and I don’t know that I got an answer beyond a fallback to corporate structures and formalized processes (one person pointed out that designers and researchers have very different skill sets, but that wasn’t my question – if this is collaborative work, can’t teams of people with complementary skills deliver ONE thing – “design” – rather than breaking it down so much?)
  • this is a bit of a hot topic among software/tech/design types right now
  • MORE I REMEMBERED: Christian Rohrer from eBay defined success criteria for user research as impact (which I really liked), and he defined impact as
    1. credibility
    2. consumability
    3. relevance

FreshMeat #8: Everyone Remembers Their First Time

========================================================
FreshMeat #8 from Steve Portigal

               (__)                     
               (oo) Fresh                  
                \\/  Meat

FreshMeat. It’s free as a bird now, so join in!
=========================================================
A lazy journey through mistakes made and lessons learned
=========================================================

It was a hot Toronto summer, late in the 1980s. I was
wearing shiny dress pants and a sock tie, sitting in a
big downtown office tower. I can’t imagine it, but I may
very well have been fresh-faced.

Yes, I was a summer intern.

I worked in the computer support department of a bank
that had offices across Canada. I provided technical
support for anyone who had a computer problem, be it
hardware, software, DOS, what have you. This was
pre-Internet, so there was no way to know anything
about the state of their system except what they were
able to tell you. It was a challenging job, and gave
me a real sense of user empathy.

As part of my internship, I was asked to develop an
application for one client, a woman who was using a
spreadsheet to manage data for her investment
customers.

A spreadsheet, for those who haven’t used Lotus or
Excel (or VisiCalc), is basically just a bunch of
columns of data. Across the top may be headers such as
name, date, opening balance, etc. For example, I use a
spreadsheet to manage my collection of live music
recordings, so I have headers such as band, date, venue,
number of discs, comments. Each row, therefore, is a
different “record” in the database. It’s quite cool
because you can sort it by any field, or look at certain
subsets of all your data (originally, spreadsheets were
described as “what-if” programs).
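
To make the row-equals-record idea concrete, here is a
minimal sketch in Python (a modern stand-in; the headers
echo the live music example above, and the shows
themselves are invented):

    # Each spreadsheet row becomes one record.
    shows = [
        {"band": "Band A", "date": "1989-07-01",
         "venue": "Venue X", "discs": 2,
         "comments": "great encore"},
        {"band": "Band B", "date": "1990-03-15",
         "venue": "Venue Y", "discs": 1,
         "comments": ""},
    ]

    # Sort it by any field...
    by_date = sorted(shows, key=lambda s: s["date"])

    # ...or look at a subset of all your data.
    multi_disc = [s for s in shows if s["discs"] > 1]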

We decided to move the program from Lotus (a spreadsheet)
to dBase IV (a database program in which you could write
programs that would add, delete, sort, search, etc.).

The client sent me her spreadsheet (I guess she must
have put it on a floppy disk and mailed it to me) and
I sat down and spent several weeks putting together my
dBase program (a bit of trivia: I think we used
something called Clipper that made actual “programs”
out of dBase code). I built in all the great functions.
ADD a new record. DELETE a record. And, the good ol’
standby, CHANGE an existing record.

I did a really nice job. The program offered you three
choices (ADD, DELETE, and CHANGE), which you could
select by pressing 1, 2, or 3. I think I was wise enough
to include a function that would let you quit the program.
When you made your choice, you would see a new screen
that said something like “Enter the number of the record
you would like to delete” and had a little space to type
it in. I’m sure I even had confirmations before deletes,
and feedback to tell you that your record had been added.
All this without a single course in user-interface design!
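
For flavor, here is a minimal sketch of that kind of
menu-driven front end – in Python rather than dBase or
Clipper, and with invented prompts, since the original
code is long gone:

    # Record number -> record text; a stand-in for
    # the dBase file.
    records = {}
    next_id = 1

    while True:
        choice = input("1) ADD 2) DELETE 3) CHANGE "
                       "0) QUIT > ")
        if choice == "1":
            records[next_id] = input("New record: ")
            # Feedback that the record was added.
            print("Record", next_id, "added.")
            next_id += 1
        elif choice == "2":
            num = int(input("Enter the number of the "
                            "record you would like "
                            "to delete: "))
            # Confirmation before deletes.
            if input("Really delete? (y/n) ") == "y":
                records.pop(num, None)
        elif choice == "3":
            num = int(input("Record number to change: "))
            if num in records:
                records[num] = input("New value: ")
        elif choice == "0":
            break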

A few days before the end of the summer, I delivered it
to the client, still never having met her or discussed
her expectations. I got a very nice phone call a few days
later. She was very appreciative of all the effort, but
she politely informed me that it wouldn’t be much use to
them, because of the way they used their current solution,
the spreadsheet. They would typically scroll very rapidly
through the data, looking for “flags” that they had
embedded – two- or three-character codes they placed in
front of the customer’s name to help them anticipate
certain actions. My wonderful front-end made that entirely
impossible.

And thus endeth the summer. Many successes and one failure.

But it was to be more than a year before the reasons for
the failure really became clear. Back in school, I
encountered my first course that considered the human part
of software – the user. I was struck with a ton of bricks
when shown that the people who make any kind of stuff
(revolving doors, stairway railings, library searching
software) are responsible for ensuring that their intended
users can actually make use of the thing.

Whoah.

And then my software development experience came into sharp
relief. I had made dozens of assumptions without realizing
it. I had never before grasped my own responsibility to
step outside myself in order to understand how that program
was going to be used. This second ton of bricks hurt just a
little more. But it changed forever how I looked at the
process of designing anything.
