More droids and droid strategy

Here are some more thoughts on automated strategic droids. In a science fiction context, this can make stuff seem more "real". However, in the real world, it will probably be quite a while until we entrust droids with strategy and combat to this extent.

We've previously discussed how independent or centralised a droid should be under certain circumstances (or have we? I don't remember, and I'm far too lazy to look it up). However, it seems likely that a combination of the two should be considered.

I feel as though a decent number of computer scientists would go for a 'cloud', where each droid both calculates its own choices and contributes to a consensus result, and the group acts on that. It certainly sounds rather organic (as it were). However, centralisation and decentralisation both have their advantages, and each should be used where appropriate. There is also the weirder case where the droids are commanded to act without communication for a while, meaning they are under central control but cannot communicate with said control until some parameter is met (for stealth reasons). While an AI might come up with its own solution, we can lay out some basic rules here for when to use either approach.

There are some cases where decentralised control is more useful. Where time is of the essence, and communication takes much more time than processing, a unit should be decentralised. We will call what the unit does in this case a "reaction". The size of a unit could be a single droid, or a platoon. A reaction should only be considered if communication time severely limits the success chance of that unit (or communication is unreliable).
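The reaction rule above can be sketched as a simple decision function. Everything here is an illustrative assumption (the names, the thresholds, the idea of measuring delay in seconds), not any real droid API:

```python
# Sketch of the "reaction" rule: act locally when waiting on central
# command would blow the deadline, or when the link can't be trusted.
# All parameter names and the default threshold are invented for
# illustration.

def should_react_locally(comm_delay_s: float,
                         local_decision_time_s: float,
                         decision_deadline_s: float,
                         link_reliability: float,
                         min_reliability: float = 0.9) -> bool:
    """Decide whether a unit should act on its own (a "reaction")
    rather than defer to central command."""
    # Unreliable link: don't gamble on orders arriving at all.
    if link_reliability < min_reliability:
        return True
    # A round trip to command plus local processing must fit
    # inside the tactical deadline; otherwise, react.
    round_trip = 2 * comm_delay_s + local_decision_time_s
    return round_trip > decision_deadline_s
```

A unit facing a half-second window with one-second links would react on its own; the same unit with millisecond links and a relaxed deadline would defer to command.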

There are some cases where centralised control is more useful. Broad data processing for multiple units (say, ten thousand) would be much better done by sending the data to a few central units, themselves far enough away from combat that they don't have to react to things (and can thus use that processing power for command). Assigning objectives (especially common ones) would also be better done further up the command chain rather than from the perspective of any single bot (or a cloud of them).

Programs are relatively small, so each droid could carry a dormant "commander program", waiting in the wings for when the army is decapitated. The German Army found that when it trained all of its soldiers to be able to assume the rank immediately above them, the army itself was a lot more resilient to the kinds of shocks an army takes over a war. Thus, you could easily choose a new commander.

You'd probably want to choose a commander (or set of commanders) far away from the front lines, but one that still has stable connections to a large number of troops.
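That selection criterion could look something like the toy sketch below: prefer droids far from the front that still hold stable links to enough peers. The fields, the threshold, and the scoring are all invented for illustration:

```python
# Toy sketch of commander election after decapitation: among droids
# with enough stable links, pick the one furthest from the fighting.
# The Droid fields and the min_links threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Droid:
    ident: str
    distance_to_front_km: float
    stable_links: int  # peers reachable over a reliable channel

def elect_commander(droids: list[Droid], min_links: int = 3) -> Droid:
    """Pick the candidate with enough stable links that sits
    furthest from the front line."""
    candidates = [d for d in droids if d.stable_links >= min_links]
    if not candidates:
        raise ValueError("no droid has enough stable links to command")
    return max(candidates, key=lambda d: d.distance_to_front_km)
```

In a real system every droid would run this same deterministic rule over shared state, so they would all agree on the new commander without needing a separate election protocol.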

The commander's role would probably be identifying likely routes for enemy supply based on current enemy positions (both likely and known) and terrain, and then identifying the most likely way to attack those routes. Frontline units themselves should not be concerned with that, because in a tactical situation supply routes are not quite as important (not entirely unimportant, but not as definitively advantageous). The commander would likewise identify its own routes of supply, work out the most likely points of attack given some disposition of the enemy, and then protect those.

The commander would also have to do all of the logistical management that armies demand. This would be relatively cheap, as it is the sort of task computers are far better at than people.

The commander would also have to collate data from all the frontline units to determine the likely strengths of unknown enemy formations (often reserves). It would use a combination of equations, learning, and traditional military doctrine to attempt this (specifically how is a question for smarter people than me).
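To make the collation idea concrete, here is one naive placeholder: fuse frontline reports with a confidence-weighted average. A real system would use proper filtering or learned models, as the text says; this sketch and its report format are purely assumptions:

```python
# Naive fusion of frontline strength reports. Each report is a pair
# (estimated_strength, confidence in (0, 1]); the result is the
# confidence-weighted mean. A stand-in for the real "equations,
# learning, and doctrine" the commander would actually use.

def estimate_strength(reports: list[tuple[float, float]]) -> float:
    """Return the confidence-weighted mean of strength estimates."""
    total_weight = sum(conf for _, conf in reports)
    if total_weight == 0:
        raise ValueError("no usable reports")
    return sum(est * conf for est, conf in reports) / total_weight
```

A confident scout's report of 200 effectives would pull the estimate much harder than a hesitant report of 100, which is roughly the behaviour you'd want before any smarter machinery is bolted on.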

These are all things which work much more efficiently with a centralised command structure, though there are ways of accomplishing them in a cloud. Centralisation simply makes the best use of the resources in those cases, and nothing precludes using less active computers to process particular tasks in a cloud-like manner.

Anyway, just some more thoughts.


Anti-trust and the Illusion of Democracy

I'm posting this here because I don't really have anywhere else to, and this seems more permanent (and accessible) than a facebook post.

Communism is an interesting case of idealism failing in the face of reality. I think it failed on something more basic than economic policy, however. Rather, it failed due to the way it concentrated power. While money or the means of production were spread out among the population (the specifics of how varied between regimes), they were replaced by personal or political clout. Power gave way to more power, and the ruthless were allowed to collude. Everyone else was forced to, in some way or another.

Obviously, communism and its method of employment (how communism was employed, rather) are separate issues. During the early 20th century, there was a decent amount of writing, thought, and old fashioned thuggery on what the best mode of government was.

This got me thinking about democracy and why people vote. Or why they should vote, and in particular for one candidate or another.

In the past, I was a policy voter. I had a few things I cared about and would vote for the party that promised them. While I still care about those issues to some extent, there are a lot of reasons why one policy promise might be rejected. It was a rather lazy way of thinking, especially using certain policies (or the lack thereof) as a reason to be cynical and disenchanted with politics, and in the modern world there is no greater sin than laziness.

But what about ideology? This seems like the most natural way to vote, especially given the current rhetoric. My own experience talking to STEM majors shows a very libertarian slant, while academia for the most part is very leftist (in varying ways). But I do think that it is just that. Rhetoric. Take the United States Democratic and Republican parties. While they play off one another, they agree on a lot of issues. Ultimately, the Democrats pay a lot of lip service to welfare and equality, and the Republicans pay a lot of lip service to small government and freedom. If you have any awareness of what either party actually does, you can kind of see why voting based on ideology (at least the ideology a party or politician claims to be part of) is kind of dumb.

The USA's voting system contributes to these problems, but that's already been discussed to death elsewhere.

At least one weakness of voting based on ideology is that such voting leads to the idea that ideology is preferable to pragmatism. I think most people who follow an ideology would argue that their ideology is pragmatic, and at the very least, would hope that a politician would vote the pragmatic way rather than the ideological way if the two were opposed. Were politicians not considered slightly less trustworthy than used-car salesmen, this might be ideal. And even in the ideal case, it would cheapen the ideology, and the rhetoric that comes with it.

Ultimately, choosing to vote for one candidate over another is a problem of trust. You would hope that the politician you voted for not only watched out for you (yes, you, citizen), but also watched out for the country, without selling either out for a buck. Obviously, not every voter has the time or inclination to go through every single policy his country might enact, along with working and doing whatever else a human might do, so you'd want someone to do that voting for you. Politicians are necessary, but they don't have to be necessarily evil or self-serving.

Neither would every voter know a candidate personally enough to trust them the way they trust a friend. This is practically impossible (in the more modest sense of the term).

Really, trust should go up and down, but not horizontally. What do I mean by this? In short, you should be able to trust those above you to make the right decision, or at least make the decision that positively or neutrally affects you and your community. However, you shouldn't be able to trust your peers, at least not once you hold more power and responsibility.

Imagine a community club. Alex is considered the best person for the job by everybody, and is voted in as president. She has de jure power, and actual responsibility (which she has accepted). Burt is very popular or very rich or has some other form of de facto power, and very little responsibility. All the other members are below both of these two for the purposes of this example. Alex knows what the best thing to do for the club is. However, sometimes Burt uses his money or power to get some personal benefit from the club, against the better judgement of Alex (though she might herself gain from it). Both Alex and Burt have the same amount of power, but because Burt can see what Alex is doing (due to public records or the like), Burt can use his power to influence Alex.

Burt can trust Alex to bend to his will, as Alex also stands to lose a lot. Depending on the form of his power, he can influence votes and get Alex voted out, which means that even if Alex wants to do the right thing and doesn't care about personal gain, she must play the game. This is obviously unfair to Cassidy, Desmond, and Ellie, who eventually leave the club to do other things after being blown off one too many times.

(Not entirely a good analogy, oh well)

Now, imagine instead of five members, there were a thousand. But still just Alex and Burt. 998 members would benefit from Alex having power were she able to use her own conscience, rationality, and ethical compass. But Burt corrupts this. Fuck Burt.

In a representative democracy, "Alex" has been split up into a number of people, which means that potential Burts need to use a lot more resources to corrupt the system in a reliable fashion. Hiding the Alexes' votes makes it even harder for Burt, because he cannot guarantee a return on his transaction (or even know whether a return exists). The USA currently has an issue where senators' voting records are public, which strengthens partisan politics and the somewhat notorious corporate kickbacks to their campaigns (see: voter fraud, vote receipts, etc.).

Indeed, the Republican Party (and Democratic Party) could both be seen as Burts. Partisan politics is ultimately good for the politician's career and the party, but neutral or detrimental to the politician's constituency. As politicians can see who their friends are voting for, they may vote against a bill they agree with, simply to retain good standing in their party.

The Founding Fathers seemed to have some inkling that this was the case, writing in a separation of Church and State. This is, perhaps, a special case (as in, a specific case). At the time, the Church was a major social and economic power (well, churches of varying sorts were). Even now, the Republican Party uses a significant amount of religiously driven rhetoric to drive conservative voters to the polling booths; a direct effect of Fusionism in the 80s. But the Founding Fathers didn't generalise to mention other major power blocs.

For relatively small organisations, being able to see how your friends are voting isn't truly corrosive, at least not on the industrial scale that a large democracy might achieve. But as your power grows, so must your responsibility, and voting based on your friends or personal gain becomes more and more damaging.

Really, there should be laws that prevent large organisations (in terms of power or population) from trusting any politician too much, or at the very least, from being able to trust them more than Joe Random could trust them. Certainly not to the tune of several million dollars' worth of trust.

Similarly, media should keep itself relatively separate from politics. "Fair and Balanced" might be the motto of Fox News, but they are (obviously) far from that. The dramatising and sectionalising of media around particular parties or politicians drives people's opinions apart, as viewers of Fox News tend more towards the Republican Party, while the "viewers" of Tumblr occupy a different part of the political spectrum and drift further toward its extreme.

People should be able to get information about politicians, politics, and policy, but such coverage should be neutered, or at least very neutral. How that would be enforced, I have no idea. I'm just frustrated at Rupert Murdoch.

Obviously, Australia is not the USA, and our problems are often imported (the Liberal Party attempting to mimic the Republican Party), but an increase in anti-trust laws between politicians and other major powerful persons or organisations might reduce the amount of corruption and self-serving politicians, even within parties.

We can recognise the amount of harm caused by collusion in the marketplace. Why not in politics?