Intelligent filters (including "digg"-style moderation systems): Bad idea. It only leads to censorship. Dissenting views, i.e. views not popular with the community at large, are not bad, nor are such posts trolling. A troll is something else entirely. Any attempt at community moderation like this will result in legitimate threads, threads that can and do turn into meaningful conversations, getting buried. It removes the choice from the individual and puts it in the community. That's not what I want.
Big ban hammers (aka giving the community or too many people the authority to ban): Also a bad idea, for a lot of the same reasons as above. But worse, malicious users can find ways to game the system and get other users banned, including users the community at large appreciates.
Again, keep it simple. Let me truly ignore the troll. If enough people ignore someone, or if users make other attempts to "game the system", there should be some active moderators with the authority to take the user out.
In the real world, if someone wants to make a speech of dubious quality in a public place, police (not citizens) make sure they don't cross a line, intelligent people can just walk away, and when a line is crossed the police (not citizens) can arrest the individual. It works there; it can work here as well.
Social networking doesn't work. There was a great article on why and what could be done about it, but I don't have time to find it again. The problem is that once the community reaches a certain size, the statistics for who's viewing what become meaningless. The article suggested limiting the community to only "friends", but the calculations required to make this work are tremendous.
I agree, keep it simple.
1. Require some sort of identification when joining. E-mail accounts are probably enough, though they're easy enough to create that a determined troll can do it. However, it does take some effort, and one would hope the other points here lead to it not being "worth it" to the troll.
2. Implement per-user (not shared) ignore lists.
3. Posts by an ignored user aren't seen.
4. Responses to an ignored user aren't seen.
5. New posts with quotes from an ignored user aren't seen.
6. Members have a unique RSS feed that doesn't contain entries from ignored users.
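The filter those points describe is simple enough to sketch. This is a hypothetical illustration, not any forum's actual code; the `Post` fields and the `visible` function are names I've made up for the example:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical post record; the field names are illustrative only.
struct Post {
    std::string author;
    std::string reply_to;             // empty if the post isn't a reply
    std::vector<std::string> quoted;  // authors quoted in the body
};

// Per-user (not shared) ignore list: a post is hidden if its author is
// ignored, if it replies to an ignored user, or if it quotes one
// (points 2 through 5 above). The same check could back the RSS feed.
bool visible(const Post& p, const std::set<std::string>& ignored) {
    if (ignored.count(p.author)) return false;
    if (!p.reply_to.empty() && ignored.count(p.reply_to)) return false;
    for (const auto& q : p.quoted)
        if (ignored.count(q)) return false;
    return true;
}
```

The point of keeping the list per-user is visible in the signature: the ignore set is an argument, not global state, so each member's view of the thread is filtered independently.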
Now a troll shows up (or just someone you find irritating enough that you don't care to see them). You ignore them. If others "feed the troll", the user continues posting, but you aren't bothered by him. If everyone (or enough people) ignores him, he gets bored of posting. Worst case, he spends the time necessary to create a new account (remember the first bullet point... the harder you make it to do this, the less likely he is to do it... though if you make it too difficult you'll scare away legitimate people as well). You easily ignore him again. Lather, rinse, repeat until he gives up.
I don't want to prevent anyone from being able to assert their right to "free speech". Banning is the wrong answer for so many reasons. But I want the ability to ignore them. In a forum such as this, I can't do it without some assistance, so give me that assistance.
Dr Herbie wrote:
I didn't say 'not enough has changed' I said it feels like less of a change than the previous one. A lot has changed under the hood, but that's kind of my point; for most users the UI is the application. The UI changes from XP to Vista are more a refinement and less of a radical change. Radical change is what gets users excited even if it's mainly the change from grey default UI to blue default UI that happened with the Win2000 to WinXP change (it was like going from black and white TV to colour for a lot of people, so the impact was big).
I don't feel that Vista is a completely new experience. Running Aero is still less of a change from XP than XP was from Win2000, from my perspective (perhaps it was the addition of colour in XP that made it feel so different for me). Changing the shape of the Start button and adding transparency is not different enough. That's obviously subjective. Perhaps Vista + Office 2007 would feel radically different, but I don't use Office much any more.
Now WPF could lead to radical changes in the UI, but MS decided not to implement too much of this in the OS (probably didn't want to scare users off).
I use XP at work and Vista at home on similarly spec'ed PCs. I don't get any kind of jarring experience moving between the two. I tend to use much the same software on both PCs (IE7 and Visual Studio 2005 are my personal mainstays), so that might explain why.
While the search functionality of Vista is useful, I had WDS on XP. I tend to use the same set of applications over and over, so they're always right there on the start menu anyway without having to search. I don't tweak my system much so I hardly ever see the control panel. Gadgets are nice, but if I'd really wanted them I could have got them for XP from 3rd parties.
Vista might be more responsive, but I got a new PC anyway so even XP would have been more responsive.
Really the only reason I got Vista was that I was due to get a new PC anyway, and as a developer I wanted to be able to keep up with the new stuff. If I wasn't a developer I might not have bothered. Most users are not developers.
All of the changes I mentioned are not "under the hood" (well, the driver thing is, but it's also very evident when you have a driver crash). And I can't agree even if we're only going to talk visuals. Transparency is a much larger change than simple color. And even there, the gradients used in Vista are radically different from the solid colors in XP. The chrome is cleaner and more professional-looking than XP's. Remember, XP was derided for having a "crayola" style when it came out. Many of us grew to like it, but early adopters hated it. Do you hear any early adopters complaining about the style used in Vista? That's a radical change.
Guess we'll have to agree to disagree.
See, I don't care if people aren't interested in Vista. I have no personal gain there, and I'm not religious. I just don't see the argument that "not enough has changed". To me, everything has changed.
Dr Herbie wrote:
Shining Arcanine wrote:
Perhaps I am reading into things too much, as regardless of the size of the market, Microsoft is making more money than it did in the past and people I know who have shares in Microsoft will likely see them grow.
I expect that's how MS see things: number of units sold may well be less important than revenue generated.
The way I see it is that Vista doesn't 'feel' as different from XP as XP did from Win2000/Win95. Because there is less of an obvious difference, people won't rush to change, so I would kind of expect uptake of Vista to be slower. It feels more like MS is laying the groundwork for larger changes in the future (or they've completely lost the plot and added a load of random stuff).
I can't agree. Vista, to me, is a completely new experience. The visuals are radically different. Most (not all) of the UI has been cleaned up nicely from a usability point of view. Security is very different (I like it, but I know the arguments about UAC; unfounded as they may be, it's still a prevailing opinion). The OS seems more responsive in many areas. Bad drivers, such as video drivers, don't bring the system down. The start menu/orb/pearl/whatever is much more useful. The sidebar is the first one I've seen that I like, and for many users it will be the first one they've ever seen. XPS is handy.
I could go on, but the point is, after using Vista I hate going back to XP. So like it or not, I don't see how anyone can claim "not enough has changed". There are probably more changes in Vista than there were in XP, though to be fair I remember a lot of people making this same argument back then. It lasts about a year, then no one seems willing to go back.
I agree to an extent. I'm not likely to actually leave... I'm a stubborn cuss and still like what C9 has to offer. But I'm also completely fed up with about half a dozen idiots on here, and want a way to "make them go away" yesterday.
wkempf wrote: But in general, this is something you shouldn't think about until AFTER you've got the program functioning and have proven there's a performance issue.
I would respectfully disagree with that last statement. I think it is prudent to consider up-front performance implications on a particular pattern.
80/20 rule. Worrying about it up front ensures you're going to spend time optimizing the 80% of the code that has no impact on the performance.
Not to mention, if your interfaces are designed correctly, this low-level detail should be trivial to change based on real need discovered during testing.
I've worked with micro-optimization nuts. Their code never ran any faster, but they spent a lot more time writing it. I'm not suggesting you totally ignore performance, just that you understand the 80/20 rule and why "premature optimization is the root of all evil".
Cornelius Ellsonpeter wrote:
Minh wrote: I think stacks, by default, are tiny... like 8K or something like that.
But my C++ is rusty. How do you dynamically create stuff on the stack?
I know a new Class() goes on the heap, so can you dynamically create structs?
I have a simple philosophy. Get it working first. Worry about performance later.
something * something_ptr = new something;
You're right though...it probably matters more that I get the fool thing working first and then worry about speed increases. I was just trying to save myself some trouble down the road...and maybe nowadays it isn't so much of a speed issue anymore.
That puts it on the heap, not the stack.
The stack is limited. You can change the limit by changing some link settings when you compile, but in general you don't want to put too much on the stack.
Once an object is allocated, accessing it performs the same whether it's on the stack or the heap. The expense is in allocating/deallocating from the heap. Given this, there are all sorts of allocation schemes you can use to increase the overall performance of your application, but they are very dependent on several factors you've not given us details of. For instance, is the size of one of these "structs" fixed or dynamic? Will you create them once and keep them around for the life of the application, or will they be created and destroyed frequently? And so forth and so on. But in general, this is something you shouldn't think about until AFTER you've got the program functioning and have proven there's a performance issue.
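To make the stack-versus-heap distinction in this exchange concrete, here is a minimal sketch. The `Something` struct and the function names are my own stand-ins for the poster's types, not anything from his code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the poster's "struct".
struct Something { int data[4]; };

// Automatic (stack) storage: size must be known at compile time, and
// the object is freed automatically when the scope ends.
std::size_t stack_demo() {
    Something s{};       // lives on the stack
    s.data[0] = 42;
    return sizeof s;
}

// 'new', as in the thread, always allocates from the heap; the caller
// is responsible for pairing it with 'delete'.
int heap_demo() {
    Something* p = new Something{};  // lives on the heap
    p->data[0] = 42;
    int v = p->data[0];
    delete p;
    return v;
}

// A dynamically sized collection belongs on the heap; std::vector
// manages the allocation for you. The stack is small (the default
// limit is set at link time, e.g. MSVC's /STACK option), so large or
// unbounded local arrays risk a stack overflow.
std::size_t dynamic_demo(std::size_t n) {
    std::vector<Something> many(n);
    return many.size();
}
```

This also answers the earlier question in the thread: you can't portably create dynamically sized objects on the stack in standard C++; the idiomatic answer is a heap-backed container like `std::vector`.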
JasonOlson wrote: "free as in puppy, not free as in beer,"
This, IMHO, is the quote of the day!
And just like a puppy, it eventually ruins your shoes and soils the carpet...
Again, as offensive as the trolls on the other side of the aisle. Jason was being reasonable in his post, and the quote was funny. But turning that into "it eventually ruins your shoes and soils the carpet" isn't constructive, isn't reasoned, and is offensive.
Can we please stop the FUD? Let's discuss this on a technical level.
Tom Servo wrote: Load of crap. Unless an open source project is backed by a huge company, the product coming out of such a project is about as far away from usability as the sun is from the galactic center. Nice superior ways, if the only stuff resulting of it requires expert knowledge to handle.
That's as ignorant and offensive as the trolls on the other side of the argument. Apache, BSD, the Linux kernel, Tomcat, Boost... I could go on and on naming OS projects that are highly successful, very usable, and in general of superior quality. However, just like with closed source software, I can name a lot of really awful OS stuff too.
The methodologies are different. Each has their pros and their cons. Neither is generically better or worse than the other. It depends on your needs, the people doing the work and several other factors.
So Linus prefers OpenSource. Great. More power to him. At least I know he came to his decision based on experience and technical merit, not on some "religious" or "political" agenda. Me, I work on both sides of the aisle, using products from both sides. I pick what's best for a specific need, and the methodology used for creating it be damned.
wkempf wrote: OK, now you're at least trying, no matter how unsuccessfully, to argue technical points. That's a step in the right direction, but so far you're proving nothing but that you don't know what you're talking about.
Windows has the concept of an "executable bit". On 2K3 machines, for instance, when you download an executable and attempt to run it, you'll get a security dialog preventing it from running unless you tell it to (at which point you can permanently set the bit). I'm pretty sure Vista is the same. So, you're ignorant about Windows. You're just parroting what you've heard.
Then on the Linux front, you're also parroting what you've heard. Hardened Linux systems, and Ubuntu might be one of these, may act as you describe, but most distros do not. And I'd argue that such a distro isn't really usable as a desktop, anyway. An end user wants to be able to make choices about what they run and what they don't, and do so easily. Warning them that they are doing something potentially dangerous is a good thing. It gives them the knowledge to make an informed choice. But making them run only "approved" and signed applications, or dig down into the internals of the OS in order to bypass the security, is simply going too far with no added benefit. It's like MS's activation schemes meant to stop pirates. They don't stop the pirates (or the clueless user); they only make the legitimate (or informed) user pay a heavy price.
People are giving UAC a bad rap, but what you describe is 1000 times worse.
Ubuntu isn't a hardened "distro", it's just a secure distro. It gets most of its security from Debian, the operating system that inspired many other distributions. Ubuntu is by far (looking at statistics) the most popular desktop distro, and many times "Linux" and "Ubuntu" are used to describe the same thing (check Digg). It is also the one I know most about. You can say "well, Damn Unsecure Linux does it X and X way", but I can't really confirm or deny that because I've never tried it. Windows 2003 is a server OS, so it's irrelevant for this specific aspect of security.
Anyone else noticing how he consistently fails to address anything meaningful in his responses?
Who cares if Ubuntu is the most popular? Who cares if a lot of clueless people say "Linux" when they mean "Ubuntu"? I didn't say anything about either of these points, and they in no way relate to what I said.
Yes, 2K3 is a server OS, but Vista isn't, and I said I was fairly certain it behaved the same way. Not to mention, the capability here isn't unique to 2K3 anyway. Even on XP Home I can make an EXE non-executable. So with regard to this specific point you tried to make, Windows isn't technically inferior; they just made some bad choices about default behaviors. You won't find anyone who disagrees there, but Vista has addressed these concerns. It's now making the same (or in some cases better) default decisions as all of the OSes you want to claim are secure. But your response to this is just "Windows is insecure at the software level", which has never been the case and which you've failed to provide any evidence for.