Reply to topic  [ 7 posts ] 
Formulating a form to rate form. 

Joined: Thu Apr 23, 2009 6:28 pm
Posts: 851
Location: EC1 Baby!
So in my role at work I'm responsible for a fair bit of the training side of things in terms of software, protocols and such - and although one of my first decisions on appointment was to do away with a rather tired and pretty redundant aptitude test, I am obliged to make some kind of (statistic-based) assessment of my colleagues' abilities from time to time.

So, this week I will be re-inventing a test (I've already started to) - but in addition to simply collating the point-scores, I want to "means-test" (if that's the proper use of the term) the results so they can accurately account for some people's extra exposure to certain aspects of our software.

For instance:

Person A takes the test and scores 60/100
Person B takes the test and scores 40/100

Person A has had 3 years of use of the software, on 3 jobs, and by the virtue of their exposure to the software, should have scored higher.
Person B has only had 6 months use of the software, on 1 job, so in fact has done pretty well to score what they have.


My question therefore, is what kind of formula/equation/model can I apply to do that?
I'm thinking, since the test will cover areas of software function (some used and some not in certain projects), there must be a way of saying:

Project/Test categories (A=100 B=100 C=100 D=100 E=100)
Projects (Project 1 = ABCDE / Project 2 = ABC / Project 3 = ACD)

Person A = Project 1, 2 & 3 (=500)
Person B = Project 3 (= 300)

Person A = 60/100
Person B = 40/100

Person A = ((60/100)/500) = 0.0012
Person B = ((40/100)/300) ≈ 0.00133


Thus proving Person B got the "better" result.

It's not a flawless method - and I would certainly present both sets of results (the raw score, and the moderated one) - but without complicating it further, do you think it a fair system?
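To make the idea concrete, here's a minimal Python sketch of the moderation described above. The function names are mine, purely for illustration; the exposure totals count each category once across a person's projects, which is how the 500 and 300 figures in the worked example are reached.

```python
# Exposure-weighted moderation of raw test scores.
# Each test category is worth 100 points of "exposure".
CATEGORY_POINTS = 100
PROJECTS = {
    "Project 1": {"A", "B", "C", "D", "E"},
    "Project 2": {"A", "B", "C"},
    "Project 3": {"A", "C", "D"},
}

def exposure(projects_worked):
    """Total exposure points, counting each category only once
    across all of a person's projects."""
    categories = set().union(*(PROJECTS[p] for p in projects_worked))
    return len(categories) * CATEGORY_POINTS

def moderated(raw_score, projects_worked):
    """Raw score as a fraction, divided by exposure - as in the
    worked example above."""
    return (raw_score / 100) / exposure(projects_worked)

a = moderated(60, ["Project 1", "Project 2", "Project 3"])  # 0.6 / 500 = 0.0012
b = moderated(40, ["Project 3"])                            # 0.4 / 300 ≈ 0.00133
print(b > a)  # Person B's moderated result comes out "better"
```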


Sun Nov 20, 2011 11:48 pm

Joined: Thu Apr 23, 2009 6:58 pm
Posts: 8767
Location: behind the sofa
It's an honourable objective, but absolutely impossible to realise in a way that will be seen as "fair" by everyone.

You'll need some kind of cap on the cumulative knowledge indicators, so that the most experienced staff don't score unduly low. I imagine it would depend on the category - so a simple category might be capped at 50 points, expected to be reached after 1 project, while something more complicated might be capped at 200 points after 5 projects.

The problem is, all these figures will be entirely subjective and very difficult to justify.
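As a rough Python sketch of that capping idea (the figures and the linear ramp are purely illustrative placeholders, per the caveat above that they'd be subjective):

```python
# Expected-score curve per category: grows with projects worked,
# then flattens at a cap. All figures are illustrative only.
CAPS = {
    "simple":      {"cap": 50,  "projects_to_cap": 1},
    "complicated": {"cap": 200, "projects_to_cap": 5},
}

def expected_score(category, projects_worked):
    """Linear ramp up to the cap, flat afterwards."""
    c = CAPS[category]
    ramp = min(projects_worked, c["projects_to_cap"]) / c["projects_to_cap"]
    return c["cap"] * ramp

print(expected_score("simple", 3))       # 50.0  (already capped after 1 project)
print(expected_score("complicated", 3))  # 120.0 (3/5 of the way to 200)
```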

_________________
jonbwfc's law: "In any forum thread someone will, no matter what the subject, mention Firefly."

When you're feeling too silly for x404, youRwired.net


Sun Nov 20, 2011 11:58 pm

Joined: Thu Apr 23, 2009 8:19 pm
Posts: 5071
Location: Manchester
If you're using your subjective opinion regarding how good someone should be in the formula, why not just cut the mathematical bollocks out and make the decision of how good someone is based on intuition in the first place?


Mon Nov 21, 2011 12:04 am

Joined: Thu Apr 23, 2009 6:28 pm
Posts: 851
Location: EC1 Baby!
JJW009 wrote:
It's an honourable objective, but absolutely impossible to realise in a way that will be seen as "fair" by everyone.
Well personally I don't much care about "being fair" when it comes to a work environment. I used the word in place of perhaps something more apt, like "just". We are a business and we need to remain competitive in the market - and one of the best ways to do that is by having better skilled staff - because dropping fees can only go so far.

Fairness is an odd one. If everyone takes the test, one could argue that's fair - but then given the differing levels across those sitting it, is it really? I'm not writing different tests for different ranges of (perceived) aptitude. I want a level benchmarking process, and a level playing field, with an idea to set a bar so I can identify shortcomings and strengths. It's all about finding out what (and who) I need to focus my training sessions on.

JJW009 wrote:
You'll need some kind of cap on the accumulative knowledge indicators, so that the most experienced staff don't score unduly low.
Good point - I would probably score horribly with my "system" as it's presently proposed - and "we" can't have that! :?

JJW009 wrote:
I imagine it would depend on the category - so a simple category might be capped at 50 points expected to be reached after 1 project, while something more complicated might be capped at 200 points after 5 projects.
Well I've so far got a grading in the test itself - with each category broken into basic/intermediate/advanced level tasks, weighted to score 1/2/3 respectively. But certainly a "cap" would make a degree of sense - although I'd planned for the categories of each project not to accumulate anyway (ie. you only count "A" once)
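For what it's worth, the basic/intermediate/advanced weighting could look something like this in Python (a sketch with made-up task results, not the actual test file):

```python
# Weighted scoring for graded tasks: basic=1, intermediate=2, advanced=3.
WEIGHTS = {"basic": 1, "intermediate": 2, "advanced": 3}

def weighted_score(results):
    """results: list of (level, passed) tuples.
    Returns (points earned, points possible)."""
    earned = sum(WEIGHTS[level] for level, passed in results if passed)
    possible = sum(WEIGHTS[level] for level, _ in results)
    return earned, possible

# Example: passed the basic and intermediate tasks, failed the advanced one.
results = [("basic", True), ("intermediate", True), ("advanced", False)]
print(weighted_score(results))  # (3, 6)
```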


JJW009 wrote:
The problem is, all these figures will be entirely subjective and very difficult to justify.
I wouldn't say the figures are subjective - they're effectively yes/no checks (ie. you have or haven't done it, so you should or shouldn't know it) - but since yes/no checks need converting to a figure if they're to be applied in an equation, I simply thought "yes = 100". I actually use said software to write a lot of conditional statements, and plan to use it/them to calculate the results* of each test file.

*I was also thinking of writing an easter egg into the test along the lines of "Edit the formula of the test score to report "I know Revit back to front" as your test result" :)


leeds_manc wrote:
If you're using your subjective opinion regarding how good someone should be in the formula, why not just cut the mathematical bollocks out and make the decision of how good someone is based on intuition in the first place?
I totally get where you're coming from Chris, and certainly until now that's how I have done it (ie. when asked who'd be best paired with you on what) - but we're reaching a point now where we HAVE to start working more efficiently. The software is by no means cheap (around £5K a head), nor is the training were it outsourced (about £2K a head for a basic introduction) - so "the money" want to see what they're getting for the investment. Since I'm directly responsible for reporting on the ROI - and for feeding back to HR on who's hot and who's not - AND opinion counts for nothing, only evidence and fact, I need a measure.


Mon Nov 21, 2011 1:22 am

Joined: Thu Apr 23, 2009 6:58 pm
Posts: 8767
Location: behind the sofa
snowyweston wrote:
"the money" want to see what they're getting for the investment - and since I'm directly responsible for reporting on the ROI - and feeding back to HR on who's hot and who's not - AND opinion counts for nothing, only evidence and fact, I need a measure.

I've encountered similar systems a couple of times. Being quite outspoken (I hated my job and thought the management were incompetent) I spoke out at one review meeting in front of all the other staff. It went like this:

Me: "Your method of measuring engineering efficiency is farcical. It demonstrates a total ignorance of statistical techniques and actually penalises the best engineers because of facts X Y and Z. How can you justify using such a perverse system?"

Them: "Because we have to measure something and this is all we could come up with".

Everyone just shook their heads and gave up. "The Management" left soon after, and the following years saw mass redundancies.

Is there any way you could include anonymous peer review?



Mon Nov 21, 2011 1:43 am

Joined: Thu Apr 23, 2009 8:19 pm
Posts: 5071
Location: Manchester
IMO you have to say that if experience makes them a better employee, then they're a better employee. If a more experienced person beat me in a test, and I had enough time to prepare knowing the calibre of my opponents, then it would be fair that they beat me. Only if it's a close result would some subjectivity come into it - and I wouldn't have thought you'd need a formula to see a fantastic effort on the part of a rookie, or whether they just threw in the towel.

Wouldn't a bonus pool system be unfair for the experienced employee as well, in effect, as it devalues their knowledge where both of them have the same (potentially excellent) level of knowledge?


Mon Nov 21, 2011 1:46 am

Joined: Thu Apr 23, 2009 6:28 pm
Posts: 851
Location: EC1 Baby!
JJW009 wrote:
Is there any way you could include anonymous peer review?
I've been pushing for one for ages as part of an anonymous staff satisfaction survey - but so far "they" don't think it appropriate and, worse, consider it inconsequential. :(

leeds_manc wrote:
IMO I think you have to say that if experience makes them a better employee, then they're a better employee
On-project experience is definitely a bigger factor - but it's not really in my remit to grade that.
leeds_manc wrote:
if a more experienced person beat me in a test, and I had enough time to prepare for the test knowing the calibre of my opponents, then it would be fair if they beat me. I think only if it's a close result would some subjectivity come in to it, and I wouldn't have thought you'd need a formula to see a fantastic effort on the part of a rookie, or whether they just threw in the towel..
What I've failed to allude to so far is that I have little doubt a great number of our (inexperienced) rookies will cakewalk the test; it's the (experienced) staff I'm having the real issue with. It is in my remit to measure efficiency - but how the variance between experience and efficiency is interpreted would be left in the hands of others.

leeds_manc wrote:
Wouldn't a bonus pool system be unfair for the experienced employee as well in effect as it devalues their knowledge where both of them have the same (potentially excellent) level of knowledge.
True. Hmmm. :?

Thanks for the feedback guys - it's giving me a lot to chew on. ;)


Mon Nov 21, 2011 8:55 am