Scripts are in for Event #1 and now the power is in our hands!
CrowdScoring is a major change, and we can be sure it’s going to generate some very interesting results. Last year’s sudden influx of contestants put a major strain on the official judges and exposed a tendency toward overscoring (the top half of the leaderboard seemed to vary in quality far more than the point differences would have indicated).
So a crowd-based filtering system has been installed, the multitude has been unchained, and the Muppets are running the Nursery. It’s going to take a while to get used to.
So here are some observations, and a few suggestions to adopt or confront.
– The bias has definitely shifted in favor of underscoring. Even mikefrobbins’ technically flawless entry is hovering just above 3 as I write this.
– The score gap between Great Scripts and completely broken ones still remains close… hopefully that will change as the community scores come in.
– Many people are not making any pretense of testing the scripts they score, and are scoring generously on scripts that are functionally broken. (If you work your way down the leaderboard, several scripts that fail to achieve any of the event objectives are ranked ahead of ones that achieve every objective.)
So… Now some suggestions to take into account as we participate in scoring:
Does it run? At all? Could it really work as posted? If not, is it really worth more than 1 star?
Yeah… I gave 2 stars to such a script, but that was early on, before I had made the above observations.
Could a Monkey run it? If a big banana-flavored button were wired to this script/function, would it move .log files (that are more than 90 days old) from C:\Application\Log to \\NASServer\Archives every single time?
If it does NOT, then your score should reflect that.
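For reference, here’s a minimal sketch of what the event asks for. The function and parameter names (Move-LogFiles, -Source, -Archive, -Days) are my own illustration, not the official spec — each entrant names things differently:

```powershell
# Hypothetical sketch of the Event #1 task: move .log files older than
# $Days days from $Source to $Archive. Names are my own placeholders.
function Move-LogFiles {
    param (
        [string]$Source  = 'C:\Application\Log',
        [string]$Archive = '\\NASServer\Archives',
        [int]$Days       = 90
    )
    $cutoff = (Get-Date).AddDays(-$Days)
    Get-ChildItem -Path $Source -Filter *.log |
        Where-Object { $_.LastWriteTime -lt $cutoff } |
        Move-Item -Destination $Archive
}
```

The banana-button test is exactly this: call it with no arguments, and the defaults alone should do the whole job, every single time.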
I gave the mighty Mjolinor a 3 because he failed to give default values to his parameters, even though his script had some very elegant usages in it. It pained me, but to me this is a primary scoring criterion.
Please make useful comments, and be specific. The character limit is tight (possibly a bug? [Edit: DonJ has fixed this!]), so use a -1/+1 notation to help emphasize teachable points.
Should I run it on my machine? If the script is truly a monstrosity, why waste your time fixing whatever it might break? I haven’t seen one yet that I’m that afraid of, but if I do, it gets a 1 instantly.
Test by Running. I personally don’t want to give someone a bad score without running it once to see where they went wrong… or give them a chance to prove me wrong.
Here are the tests I do:
- Paste it (as is) into a new ISE session
- Glance over the color coding that ISE provides to make sure nothing jumps out as weird
- Run it with appropriate parameters
- Verify results
- Reverse the change by swapping the source and destination parameters.
- Verify results
- Run it with inappropriate parameters (specify a non-existent drive, give a non-integer days value; if they say it accepts piped input, is there an obvious deadly matchup? etc.)
- Watch the fireworks.
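The steps above can be sketched as a few throwaway lines. Everything here is a placeholder — substitute the entrant’s actual function name and your own test paths:

```powershell
# Happy path: run it, then reverse the move by swapping source and destination
Move-LogFiles -Source C:\Test\app1 -Archive C:\Test\archive -Days 90
Move-LogFiles -Source C:\Test\archive -Archive C:\Test\app1 -Days 90

# Inappropriate parameters - watch the fireworks:
Move-LogFiles -Source Q:\NoSuchDrive -Archive C:\Test\archive -Days 90     # bogus drive
Move-LogFiles -Source C:\Test\app1 -Archive C:\Test\archive -Days 'ninety' # non-integer days
```

A well-written entry fails the last two loudly and cleanly; a fragile one either silently does nothing or does something you didn’t ask for.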
Once I’ve done enough of the same event to recognize the trouble spots I doubt I’ll run every one, but I’ll certainly give the best and worst candidates a thorough treatment. I learn a lot that way.
Set up a test environment as you develop your script, re-use that as a scoring verification environment.
If you’re paranoid, use a temporary VM; otherwise structure your environment to create a bit of a sandbox. (I like doing my testing as a non-admin user and nesting any testing folders inside that user’s profile.)
My testing for event 1 looks kind of like this:
- Dumped a wide selection of files from my temp folder into \app1, \app2, and \app3
- Ran the function
(example: move-logfiles -source c:\users\rambler\dev -archive c:\users\rambler\dev2 -days 90)
- Followed immediately with (gci c:\users\rambler\dev2 -file -recurse).count to see how many files moved
- Maybe ran it again with the exact same parameters and checked again; if more files moved on the second run, they’ve just lost some points
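That repeat-run check can be made explicit with a pair of counts. This is a sketch using my own test paths and a placeholder function name:

```powershell
# First run: everything older than 90 days should move
Move-LogFiles -Source c:\users\rambler\dev -Archive c:\users\rambler\dev2 -Days 90
$first = (gci c:\users\rambler\dev2 -file -recurse).count

# Second run with the exact same parameters: nothing new should qualify
Move-LogFiles -Source c:\users\rambler\dev -Archive c:\users\rambler\dev2 -Days 90
$second = (gci c:\users\rambler\dev2 -file -recurse).count

if ($second -gt $first) { 'More files moved on the second run - the date filter is broken' }
```

If the second run moves anything, the script is selecting files by something other than their age, and that’s points off.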
Teach people how to work with objects (especially date objects). If AddDays() or some other specialized method isn’t used, they’ve failed to learn a very key lesson… and obviously didn’t properly test their product before release.
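The date lesson really is this small — compare DateTime objects directly instead of doing string math (the file path here is just the event’s example folder):

```powershell
$cutoff = (Get-Date).AddDays(-90)   # a real DateTime, not a formatted string
$file   = Get-Item C:\Application\Log\app.log
$file.LastWriteTime -lt $cutoff     # True when the file is older than 90 days
```

Anything built on parsing date strings by hand will eventually break on month boundaries, leap years, or regional formats; the object method never does.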
Be firm. If you find it hard to score someone low for a failed attempt, try this thought experiment: what would it cost me if someone released this “tool” in my environment?
Be open-minded. Don’t score someone poorly just because you don’t understand what they did. Study it and see if they’re completely right in a way you didn’t expect.
It’s only fair that I link my own entry for your personal Inquisition. (If the link still gives you trouble, Ctrl+F “Rambler” on the voting page.)