Tenkey layout is something many keyboard users learn.
Breaking that seems like a bad idea.
Drop the trackball and the pad; most folks who need a small keyboard really only want the keyboard anyway.
It's not entirely the same. More precisely, it would be someone at Ford singing Lady Gaga lyrics to the same melody. That wouldn't include any of the actual voice work, the musical background arrangement, the instruments or the time of the musicians in creating the background, sound recording, post processing, publishing, or distribution.
But the artist makes the song once and sells it a million times. A million lattes requires a million times the labor, electricity, food costs, management, etc. Surely you would say that a barista making a million lattes over their lifetime deserves at least $5 million, compared with the artist's $2 million from a few weeks' total work?
As I read the article, the issue raised was whether the folks who filed the copyright owned the work in the first place. A whole different thing.
If the "creative work" predates the birth of the entity that filed for copyright, then how do they claim to own the rights to that work? Can they show that they purchased the rights from the author?
If not, then they have no claim.
I would say that the ROM is "data" for the application, not code.
That may sound wrong, but really it is just that.
Also, most of the early ROM systems had very, very limited function, and any decent emulator for them runs in a "sandbox" that has no direct way to access the native OS, the network, etc.
Even if you emulate a C=64, the points where it could access the "real world" are limited to a few I/O systems that you can make safe.
Most of the C=64 disk system would need to be emulated for a number of its special functions to work.
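The sandbox point above can be sketched in a few lines. This is a hypothetical Python toy, not a real C=64 emulator: it shows how a memory bus can trap every access to the memory-mapped I/O region, so the guest code never touches the host OS, network, or hardware directly.

```python
# Toy sandboxed memory bus (hypothetical): I/O accesses are trapped
# and logged instead of reaching anything real on the host.

class SandboxedBus:
    IO_START, IO_END = 0xD000, 0xDFFF   # C=64-style I/O window

    def __init__(self):
        self.ram = bytearray(0x10000)   # 64 KB of emulated RAM
        self.io_log = []                # everything the guest tried to do

    def read(self, addr):
        if self.IO_START <= addr <= self.IO_END:
            # Trap: answer from the emulator, never from real hardware.
            self.io_log.append(("read", addr))
            return 0x00
        return self.ram[addr]

    def write(self, addr, value):
        if self.IO_START <= addr <= self.IO_END:
            # Trap: a real emulator would update an emulated chip
            # (VIC, SID, CIA) here, not the host machine.
            self.io_log.append(("write", addr, value))
            return
        self.ram[addr] = value & 0xFF

bus = SandboxedBus()
bus.write(0x0400, 0x41)   # plain RAM write lands in the byte array
bus.write(0xD020, 0x0E)   # "border color" write is trapped and logged
print(bus.read(0x0400))   # -> 65
```

Anything dangerous the guest attempts simply becomes an entry in `io_log`, which is exactly why an emulated ROM is safer than native code.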
@figuerres: It is really meant as an installation option for data storage.
Opt 1) Store locally on MSSQL and not have to worry about internet issues, but you had better have a good backup and recovery plan.
Opt 2) Store in the cloud (Azure) and you only have to worry about re-installation of the app and configuration, but latency becomes an issue and you are dead when the internet is down.
Everything crashes at some point; I am just trying to come up with the best design choices in terms of recovery.
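A minimal sketch of that installation option (all names and config keys here are hypothetical): read the choice from a config file, so the cloud recovery story really is just reinstall-plus-config, while the local option carries the backup burden.

```python
# Hypothetical: pick the storage backend at install time from a
# config file, reflecting the two options above.
import json

def choose_backend(config_path):
    with open(config_path) as f:
        cfg = json.load(f)
    if cfg["storage"] == "local":
        # Opt 1: local MSSQL; you own backup and recovery.
        return ("mssql-local", cfg["backup_dir"])
    if cfg["storage"] == "cloud":
        # Opt 2: Azure; recovery is reinstall + config, but you
        # are down whenever the internet is.
        return ("azure-sql", None)
    raise ValueError("unknown storage option: " + str(cfg["storage"]))
```

Keeping the choice in one config file is what makes "reinstall app and configuration" a complete recovery plan for the cloud case.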
Does the program have one client using it or multiple?
What are the security needs of the app and the data?
If a customer runs SQL Server on a local PC, then they may be leaving the data very open to all kinds of hacking / editing, etc.
If you have a SQL store for one client app, then possibly you do not even need a SQL server.
I have done local data with LINQ in .NET and serialized the data to disk; small, simple, and you can back it up easily too.
But that is only if it is the right fit for the needs.
How many clients to a server?
Do they need to get the data at only one location?
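The "serialize to disk" approach mentioned above was LINQ/.NET; here is a rough Python analogue of the same idea (the file name is made up), with an atomic write so a crash mid-save cannot corrupt the store.

```python
# Single-file data store: the whole "database" is one JSON file,
# and backing it up is just copying that file.
import json, os, tempfile

class LocalStore:
    def __init__(self, path):
        self.path = path

    def load(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path, "r", encoding="utf-8") as f:
            return json.load(f)

    def save(self, records):
        # Write to a temp file, then atomically replace, so a crash
        # mid-write cannot corrupt the existing data file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(records, f)
        os.replace(tmp, self.path)

store = LocalStore(os.path.join(tempfile.mkdtemp(), "orders.json"))
store.save([{"id": 1, "item": "widget"}])
print(store.load())   # -> [{'id': 1, 'item': 'widget'}]
```

For a single-client app this removes the SQL server entirely, which also removes the "open SQL Server on a local PC" attack surface mentioned above.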
I think first you should make clear what you are trying to do...
"Local copy" or a connection: what do you really mean by that?
How does the software connect to the data? By a SQL connection to a server?
Would "local copy" mean a server on a laptop when there is no internet connection?
Or a SQL server on-prem that the client has to manage, with all the PCs connecting to that server?
In general I tend to first have all the clients connect to a "web service" and never to an actual server name and port.
If a PC needs "offline data", then you address that as a pull of selected data stored in a local cache; later you can upload new or changed data through a web service.
If the client never knows the details of the backend, you can make some changes without messing with the client.
Then the question is whether the web service runs at a hosted location or in the local office, and the answer is: it depends on when and where they need to be connected to work.
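The pattern above can be sketched quickly (all class and method names here are hypothetical): clients talk only to a web-service facade, pull data into a local cache, and queue changes made offline for a later upload.

```python
# Hypothetical offline-cache pattern: the client only ever knows
# about the service facade, never a server name and port.

class FakeService:
    """Stand-in for the web service, with an on/off switch."""
    def __init__(self):
        self.data, self.online = {}, True
    def get(self, key):
        if not self.online:
            raise ConnectionError("service unreachable")
        return self.data[key]
    def put(self, key, value):
        if not self.online:
            raise ConnectionError("service unreachable")
        self.data[key] = value

class OfflineCachingClient:
    def __init__(self, service):
        self.service = service   # the only backend the client knows
        self.cache = {}          # locally pulled data
        self.pending = []        # changes made while offline

    def pull(self, key):
        try:
            self.cache[key] = self.service.get(key)
        except ConnectionError:
            pass                 # offline: serve the last cached value
        return self.cache.get(key)

    def push(self, key, value):
        self.cache[key] = value
        try:
            self.service.put(key, value)
        except ConnectionError:
            self.pending.append((key, value))   # queue for later

    def flush(self):
        # Upload queued changes once the service is reachable again.
        still = []
        for key, value in self.pending:
            try:
                self.service.put(key, value)
            except ConnectionError:
                still.append((key, value))
        self.pending = still

svc = FakeService()
client = OfflineCachingClient(svc)
client.push("order", "new")     # online: reaches the server
svc.online = False
client.push("order", "edited")  # offline: cached and queued
svc.online = True
client.flush()                  # queued change uploaded
print(svc.data["order"])        # -> edited
```

Because the client only sees the facade, the backend behind it can move between a hosted location and the local office without touching client code.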
It's a mess, and yes, the reasons run deep in how all kinds of apps have been written, with developers thinking for the most part that we are still back in the 72-dpi bitmap world.
I hate (and love) reading about people changing font sizes when they get a high-resolution screen.
Really, a 72-point font is 1 inch tall; that has zero relation to the PPI of the display!
But then we create a form based on pixel sizes and set the text based on point sizes in most cases... creating a built-in failure when that relationship changes.
Also, this is one of the things MS was trying to get us away from when they first created WPF and Vista:
Do the display based on drawing with math, not on counting pixels!
Make form sizes based on inches or centimeters or points, not on pixels.
Same thing for buttons and other UI elements.
Only use a bitmap to show a picture of something, not to create a UI element like a button.
We need to do this kind of thing all over the code base, bottom to top.
If the display is based on the math, then you can scale it to fit the display area and it works.
In fact, I have used a WPF control to scale forms and shown how it can work very well: a WPF window with WPF markup inside the container.
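The point/pixel math above fits in two lines; the PPI values below are just example screens.

```python
# A point is defined as 1/72 inch, so a 72-point glyph is one inch
# tall on paper; how many pixels that inch needs depends entirely
# on the display's PPI.
POINTS_PER_INCH = 72

def points_to_pixels(points, ppi):
    return points / POINTS_PER_INCH * ppi

print(points_to_pixels(72, 96))    # -> 96.0  (96-ppi desktop screen)
print(points_to_pixels(72, 220))   # -> 220.0 (high-density laptop)
# Pixel-sized forms bake in one of these answers and break on the other.
```

A layout expressed in points or inches can be resolved to pixels at draw time for whatever display it lands on, which is the "drawing with math" idea.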
It's kind of obnoxious that the billing doesn't reflect the actual 'cost' in any meaningful way. The difference in cost of sending 100 bytes vs 1000 bytes isn't a factor of 10 no matter what kinds of costs you try to include, and yet providers of all forms try to convince us that there is a correlation. I get that it's hard to keep everybody happy when it comes to billing, but every 'reason' for billing being the way it is can be summed up by 'because we can'.
Well, there are some sunk hardware and network costs to scale up the transfer, but that is a kind of "one-time cost" that should be allocated across all of the customers and the time it will be in service.
But also of note is the way fiber has tended to carry more data through the same strands every few years through better encoding of the light signal.
That has kept the backbone providers from having to dig new trenches and add more fiber to meet demand, as much as they would have if they were using copper.
There are a lot of details in how it all works, and yes, by and large it's all about how much money you can get without spending more money.
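Rough arithmetic for the "one-time cost" point, with entirely made-up numbers, just to show how a capacity upgrade amortizes to a small per-customer figure:

```python
# Back-of-envelope only; every number here is invented for illustration.
capex = 2_000_000          # hypothetical cost of a capacity upgrade, USD
service_years = 10         # time the hardware stays in service
customers = 50_000         # customers sharing the upgrade

per_customer_per_month = capex / service_years / customers / 12
print(round(per_customer_per_month, 2))   # -> 0.33
```

Spread over the hardware's service life and customer base, the sunk cost is cents per month, which is why it cannot justify per-byte pricing.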
From what they have stated, that does seem to be the norm still today. What doesn't seem to be the norm, though (at least in this case it sounds like it isn't), is that when the network traffic is out of balance, the affected party bills the provider on the other side of the edge, and that provider then in turn bills up their stack. Apparently that isn't the norm today, I gather.
Back around 1996-2000, when I dealt with more of this stuff, there were a number of billing formulas used to bill a customer, generally based around finding the "normal peak" use for a month. But what I dealt with was ISP to user or ISP to web site, not backbone peering.
I suspect that part of the problem is that Verizon has this business under the "FiOS" brand to sell cable TV to subscribers, and Netflix may be seen as taking from that revenue stream.
Not to be "Captain Obvious", but that is part of the thing...
Also adding complexity is that they are not one company; by FCC rules they have multiple corps, so there are at least :
Also, if internet peering billing is done the way they used to do telco peering, it's based on an even sharing of traffic, with the peer that sends more paying more.
At one time, telco billing was such that if you "terminated" a call from another telco, then that telco paid you for finishing the call.
But I bet that Netflix does not want to do that, as it would cost them way too much.
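The telco-style settlement described above, as a toy calculation with a made-up rate, shows why a one-way flood like video streaming would be expensive for the sender:

```python
# Toy sender-pays settlement; the rate is invented for illustration.
RATE_PER_TB = 1.0   # hypothetical settlement rate, USD per TB

def settlement(sent_tb, received_tb):
    # Positive: you owe your peer; negative: your peer owes you.
    return (sent_tb - received_tb) * RATE_PER_TB

print(settlement(900, 100))   # -> 800.0 (heavy sender pays)
print(settlement(500, 500))   # -> 0.0   (balanced traffic, no bill)
```

With balanced traffic nobody owes anything, which is why the scheme only bites when one peer, like a video provider, sends far more than it receives.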