I think this could be a valuable service, but $99/test seems quite high (compared to similar services targeting desktop rather than mobile users). At $500 to run Nielsen's recommended 5-person test, running my own tests starts to look like a viable substitute for your service (considering I'm likely to want to run more than one round of tests).
I think this is true at both ends of the scale: Startups will compare this cost against guerrilla testing techniques (e.g., sourcing testers on Fiverr, asking people on the street, or inviting them into a lab for an hour for a small reward). Large firms will compare the value of this service to hiring a more high-touch UX firm (or building this competency internally) that does much more than record test sessions.
For what it's worth, here are some questions I have as a potential buyer:
* How long are the test sessions in general?
* How are the tests "facilitated", or more to the point, what happens if testers don't understand the task instructions or can't get the app to run?
* How are you sourcing testers? Can you quantify their demographics? Can you provide testers that meet specific demographic profiles?
Thanks for the feedback, @spokey. What would you consider a sweet-spot price point for the 5-person test? We discount as you add more users to the test: 3 for $249 and 5 for $399.
As for your questions:
1) Generally 10-15 minutes depending on the tasks assigned.
2) We facilitate the tests, but we try to keep the process as natural as possible. One thing we've learned is that it can be incredibly helpful simply to sit a new user down in front of the app. Things we assume as developers aren't always obvious to the average person. You'll see in the recording the steps a typical user takes, so you as a developer can fix the process.
3) For the time being we pick the demographics for the buyer to keep things simple. We'll add the ability to choose specific demographics later as demand warrants.
I'm a little confused by your question. Are you looking to spend $100K or to generate $100K annually? If the latter, the company is probably "worth" 2 to 5 times $100K.
Either is okay as long as the numbers are good and I don't have to spend lots of time ... if the purchase price is $100k then I expect it to make around $5k/month.
I hope this doesn't sound condescending because I'm very, very far from an expert in this field, but you might want to spend some time researching conventional methods of business valuation. I'm pretty sure your expectations are a little off here.
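To make the arithmetic concrete, here's a rough check using the 2-to-5-times-revenue ballpark from the earlier comment (a rule of thumb for illustration, not a valuation method):

```python
# Rough sanity check of the numbers in this thread. The 2-5x annual-revenue
# multiple is the ballpark quoted above, not an established rule.
monthly_profit = 5_000
annual = monthly_profit * 12      # $60,000/year

low_estimate = 2 * annual         # $120,000
high_estimate = 5 * annual        # $300,000

# A business producing $5k/month would typically be priced well above a
# $100k asking price under that multiple, so the stated expectation is optimistic.
assert low_estimate > 100_000
```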
I'm not sure it's obvious that PaaS is "winner take all", but even if it is, right now there are several deep-pocketed companies fighting to be that winner. Sounds pretty good if you're looking for an exit.
I'm a couple of years younger than Julian Assange and commonly use two spaces after each sentence, purely out of muscle memory. (I took a "keyboarding" class in 1987 or so using an IBM Selectric typewriter and that's the way I was taught to do it.)
I'm not at all surprised that many people still do this. I'm a little surprised that people assert that's a "proper" way to do it, or for that matter get worked up enough about it to write that you should "never, ever" do it. Maybe it's the engineer in me, but I was expecting an actual negative consequence of using two spaces. Typography is important, but I'd guess that with modern kerning techniques the difference between two spaces and one is slight. (In fact, if I were designing a word processing app, I'd think this is a case I'd account for: If there is a "proper" distance between the period at the end of one sentence and the start of the next letter, wouldn't you ensure that's the distance that is used whether the user typed one space or two?)
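The parenthetical idea at the end could be sketched like this (a hypothetical helper, assuming the app normalizes spacing on input and lets the renderer decide the actual inter-sentence distance):

```python
import re

def normalize_sentence_spacing(text: str) -> str:
    # Collapse two or more spaces after sentence-ending punctuation down to
    # one; the renderer then applies whatever inter-sentence spacing it wants,
    # so one space and two spaces end up looking identical.
    return re.sub(r'([.!?])  +', r'\1 ', text)

normalize_sentence_spacing("One.  Two.   Three.")  # -> 'One. Two. Three.'
```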
Yes, LaTeX collapses any run of whitespace in the source into a single space. It does, however, automatically insert extra space after a sentence-ending period. (Unless you suppress it, e.g. with the ~ symbol after an abbreviation like Mr. or e.g.)
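A minimal sketch of the standard LaTeX controls for this:

```latex
LaTeX adds wider inter-sentence space after a period by default.
Mr.~Smith met Dr.~Jones.    % a tie (~) gives a normal, non-breaking space
This works, e.g.\ like so.  % "\ " forces an ordinary interword space mid-sentence
\frenchspacing              % uniform (single) spacing from here on
```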
I'll suggest that the other stuff causes enterprise software to be of low quality.
Not all enterprise software sucks, although much of it does. Often there are non-sucky options in the space, but they have less market share, in no small part due to that other stuff. On the client side of things, if you can fix the politics you can generally fix the suck, but that means that as a supplier your success will be determined (in part) by how well you help the client address the politics and "other stuff". Sucky enterprise software exists (in part) because many suppliers have found a greater return on focusing resources on the other stuff than on improving the software.
They are the same. It was meant to draw you out. It seems to be working okay. In reality I need only a great personality for the how-to A/V. It would be great to work beside a quality hacker, especially one who is head-over-heels in love with the world and all its deep flaws. Yeah?
> If you only give someone a public key,
> how can they authenticate you based
> on that? You must be giving them some
> piece of private data, or else anyone
> could authenticate as you.
That's not quite the way it works. There is no shared secret required. With my public key you can create an authentication challenge that allows you to validate my identity without ever seeing my private key (or, for that matter, you can send me a message that only I can read).
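A toy sketch of that challenge-response idea, using textbook RSA with tiny, deliberately insecure parameters (real systems use vetted crypto libraries with proper padding; everything here is illustrative only):

```python
# Toy challenge-response with textbook RSA. Parameters are far too small for
# real use -- the point is only that verification needs no shared secret.
import random

# "My" key pair: n = p*q, public exponent e, private exponent d.
p, q = 61, 53
n = p * q      # 3233, public
e = 17         # public exponent
d = 2753       # private exponent: (e * d) % 3120 == 1, where 3120 = (p-1)*(q-1)

def sign(message: int) -> int:
    # Only the holder of the private exponent d can compute this.
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    # Anyone holding only the public key (n, e) can check it.
    return pow(signature, e, n) == message

# You pick a random challenge and send it to me...
challenge = random.randrange(2, n)
# ...I sign it with my private key...
response = sign(challenge)
# ...and you verify with my public key alone. No secret was ever shared.
assert verify(challenge, response)
```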
Yeah, I figured out what he was getting at a moment after posting, so I edited my post with my core objection, which is just that there's really no benefit I can see to doing that instead of using memorizable passwords.
If I'm using it right, it looks like more than 7.5 million papers were submitted to PubMed in this 10-year period. Only 0.01% of all papers were retracted, and only 0.003% were retracted due to "fraud". This seems like an exceedingly small sample. I wish we could see more of the raw analysis (p-values and the like).
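Taking the figures above at face value, the absolute counts work out like this (my arithmetic, not numbers from the study itself):

```python
# Convert the quoted percentages into rough absolute counts.
total_papers = 7_500_000

retracted = round(total_papers * 0.0001)   # 0.01%  of all papers -> ~750
fraud = round(total_papers * 0.00003)      # 0.003% of all papers -> ~225

# A few hundred papers out of 7.5 million is why the sample feels thin.
assert retracted == 750
assert fraud == 225
```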
I think the other result was more interesting: 53% of "faked" research papers were submitted by "repeat offenders", while only 18% of "erroneous" papers were. If you do this once, it seems, you are likely to do it again.
I haven't been active with Java for more than 5 years, and I haven't been active in the ASF for longer than that, so I may be missing some of the nuance here, but isn't it possible, maybe even probable, that this is simply the point where Java and/or the Java VM "forks" into Oracle's proprietary version (likely to die on the vine) and one or more versions from some combination of IBM, Apache, RedHat, Eclipse, etc.?
It seems like there are a large number of Java developers, a large number of organizations heavily invested in the Java stack, and a number of vendors with both the capacity and potentially the motivation to serve those organizations.
I find it hard to imagine that if developers and organizations abandon Oracle's Java in droves that no one will make an effort to fill that void. (And fill it with something more "Java-like" than not, at least at the start.)
Absolutely. Just because Oracle owns the test suite that "certifies" Java does not mean it won't go on. There are a lot of heavy hitters among the stakeholders, and they are strongly incentivized to protect the stack.
> versions from some combination of IBM, Apache, RedHat, Eclipse, etc.
To expand: don't forget about VMware (SpringSource), JBoss, Azul Systems, Qualcomm, Siemens, Hitachi, Mitsubishi, and, of course, Google (Android).
Oracle will hear from them, as it has seats on most of the major orgs, including OSGi, where most of the Telecom players live.
What good is Java if there's no longer a spec to help ensure cross-platform compatibility?
Also, there are major trademark, patent, and copyright issues that need to be settled before anybody can safely fork Java and stop adhering to the spec.
"Mostly compatible" (at the source code level) would be good enough for quite a few applications. For example, take a look at a lot of the C and C++ applications out there, which use OS APIs that often vary widely. Despite that, C and C++ applications are still written, and very popular.
The problem (as I understand it) is that without passing the TCK (which Apache can't get without field-of-use restrictions), you don't get the patent license required to safely run Java in a business environment. I don't think the line-of-business app developers that have driven the bulk of Java usage over the years can afford to risk getting sued by Oracle, so they can't really depend on a non-licensed runtime.
I thought that went the other way: you need a patent license to author and/or run the TCK. You can create a Java-compatible VM (many have), but you can't run the "official" compatibility test without Sun's (now Oracle's) permission, and there is some concern (at least at Apache) that you cannot easily create a "cleanroom" implementation of that TCK. (Although I think that latter concern is more social than legal: if I remember correctly, conventional wisdom said there were too many Sun folks already involved with Java projects at Apache to have any reasonable expectation that a cleanroom would remain clean.)
> Also, there are major trademark, patent, and copyright issues that need to be settled before anybody can safely fork Java and stop adhering to the spec.
One would certainly have trademark issues with the name Java, but I'm not certain about the rest of that statement.
I think the only real issue is around the TCK, but that's just blessing an implementation as "Java Compatible(tm)", or at least I think so. Couldn't one take one of the existing JDK/JRE implementations, rebrand it IcedTea, and create their own IcedTea compatibility test?