Pete Zaitcev's Journal

Suddenly RISC-V [28 Feb 2019|01:51pm]

I knew about that thing because Rich Jones was a fan. Man, that guy is always ahead of the curve.

Coincidentally, a couple of days ago Amazon announced support for RISC-V in FreeRTOS (I have no idea how free that thing is; it's MIT-licensed, but with Amazon, it might be patented up to the gills).


Mu accounts [28 Feb 2019|11:06am]

Okay, here's the breakdown:

@pro: Programming, computers, networking, maybe some technical fields. It's basically migrated from SeaLion and is the main account of interest for the readers of this journal.

@stuff: Pictures of butterflies, gardening, and general banality.

@gat: Boomsticks.

@avia: Flying.

@union: Politics.

@anime: Anime, manga, and weaboo matters. Note that Ani-nouto is still officially at Smug.

Thinking about adding @cars and @space, if needed.

You can subscribe from any Fediverse instance, just hit the "Remote follow" button.


Multi-petabyte Swift cluster [28 Feb 2019|10:44am]

In a Swift numbers post in 2017, I mentioned that the largest known cluster had about 20 PB. It is 2019 now and I just got word that TurkCell is operating a cluster with 36 PB, and they are looking at growing it to 50 PB by the end of the year. The information about its make-up is proprietary, unfortunately. The cluster was started on the Icehouse release, so I'm sure there was a lot of churn and legacy, like 250 GB drives and RHEL 6.


Elixir of your every fear [24 Feb 2019|07:33pm]

TFW you consider an O'Reilly animal-cover book and the blurb says:

Authors Simon St. Laurent and J. David Eisenberg show you how Elixir combines the robust functional programming of Erlang with an approach that looks more like Ruby, and includes powerful macro features for metaprogramming.


Mu! [22 Feb 2019|03:14pm]

In the past several days, I inaugurated a private Fediverse instance, "Mu", running Pleroma for now. Although Mastodon is the dominant implementation, Pleroma is far easier to install, and uses less memory on small, private instances. By doing this, I'm bucking the trend of people hating to run their own infrastructure. Well, I do run my own e-mail service, so, what the heck, might as well join the Fediverse.

So far, it has been pretty fun, but Pleroma has problem spots. For example, Pleroma has a concept of "local accounts" and "remote accounts": local ones are normal accounts that users log into at the instance, and remote ones mirror accounts on other instances. This way, if users Alice@Mu and Bob@Mu follow user zaitcev@SLC, Mu creates a "remote" account UnIqUeStRiNg@Mu, which tracks zaitcev@SLC, so Alice and Bob subscribe to it locally. This permits sending zaitcev's updates over the network only once. Makes sense, right? Well... I have a "stuck" remote account now at Mu, let's call it Xprime@Mu and posit that it follows X@SPC. Updates posted by X@SPC are reflected in Xprime@Mu, but if Alice@Mu tries to follow X@SPC, she does not see updates that Xprime@Mu receives (the updates are not reflected in Alice's friends/main timeline) [1]. I asked at #pleroma about it, but all they could suggest was to try and resubscribe. I think I need to unsubscribe and purge Xprime@Mu somehow. Then, when Alice resubscribes, Pleroma will re-create a remote, say Xbis@Mu, and things ought to work. Well, maybe. I need to examine the source to be sure.

Unfortunately, aside from being somewhat complex by its nature, Pleroma is written in Elixir, which is to Erlang what Kotlin is to Java, I gather. Lain explains it thus:

As I had written a social network in Ruby for my work at around that time, I wanted to apply my [negative] experience to a new project. [...] This was also to get some experience with Elixir and the Erlang ecosystem, which seemed like a great fit for a fediverse server — and I think it is.

and to re-iterate:

When I started writing Pleroma I was already writing a social network in Ruby for my day job. Because of that, I knew a lot about the pain points of doing it with Ruby, mostly the bad performance for anything involving concurrency. I had written a Bittorrent DHT client in Elixir, so I knew that it would work well for this kind of software. I was also happy to work with functional programming again, which I like very much.

Anyway, it's all water under the bridge, and if I want to understand why Xprime@Mu is stuck, I need to learn Elixir. Early signs are not that good. Right away, it uses its own control tool, called "mix", which replaces make(1), packaging, and a few other things. Sasuga desu, as they say in my weeb neighbourhood. Every goddamn language does that nowadays.


[1] It's trickier, actually. For an inexplicable reason, Alice sees some updates by X: for example, re-posts.


Feynman on discussions among great men [11 Feb 2019|01:52pm]

One of the first experiences I had in this project at Princeton was meeting great men. I had never met very many great men before. But there was an evaluation committee that had to try to help us along, and help us ultimately decide which way we were going to separate the uranium. This committee had men like Compton and Tolman and Smyth and Urey and Rabi and Oppenheimer on it. I would sit in because I understood the theory of how our process of separating isotopes worked, and so they'd ask me questions and talk about it. In these discussions one man would make a point. Then Compton, for example, would explain a different point of view. He would say it should be this way, and was perfectly right. Another guy would say, well, maybe, but there's this other possibility we have to consider against it.

So everybody is disagreeing, all around the table. I am surprised and disturbed that Compton doesn't repeat and emphasize his point. Finally, at the end, Tolman, who's the chairman, would say, "Well, having heard all these arguments, I guess it's true that Compton's argument is the best of all, and now we have to go ahead."

It was such a shock to me to see that a committee of men could present a whole lot of ideas, each one thinking of a new facet, while remembering what the other fella said, so that, at the end, the discussion is made as to which idea was the best — summing it all up — without having to say it three times. These were very great men indeed.

Life on l-k before CoC.


SpaceBelt whitepaper [06 Feb 2019|02:50pm]

I pay special attention to my hometown rocket enterprise, Firefly. So, it didn't escape my notice when Dr. Tom Markusic mentioned SpaceBelt in SatMagazine as a potential user of launch services:

Cloud Constellation Corporation capped off 2018 funding announcements with a $100 million capital raise for space-based data centers [...]

Not a large amount of funding, but nonetheless, what are they trying to do? The official answer is provided in the whitepaper on their website.

The orbiting belt provides a greater level of security, independence from jurisdictional control, and eliminating the need for terrestrial hops for a truly worldwide network. Access to the global network is via direct satellite links, providing for a level of flexibility and security unheard of in terrestrial networks.

SpaceBelt provides a solution – a space-based storage layer for highly sensitive data providing isolation from conventional internet networks, extreme physical isolation, and sovereign independent data storage resources.

Although not pictured in the illustrations, the text permits users direct access, which will become important later:

Clients can purchase or lease specialized very-small-aperture terminals (VSATs) which have customized SpaceBelt transceivers allowing highly-secure access to the network.

Interesting. But a few thoughts spring to mind.

Isolation from the Internet is vulnerable to the usual gateway problem, unintentional or malicious. If only application-level access is provided, a compromised gateway only accesses its own account. So that's fine. However, if state security services were able to insert malware into Iran's nuclear facilities, I think that the isolation may not be as impregnable as purported.

Consider also that system control has to be provided somehow, so they must have a control facility. In terms of vulnerabilities to governments and physical attacks, it is the equivalent of a datacenter hosting an intercontinental cluster's control plane, located at the point where the master ground station is. In the case of SpaceBelt, it is identified as the "Network Management Center".

In addition, the space location invites a new spectrum of physical attacks: now the adversary can cook your data with microwaves or lasers, instead of launching ICBMs. It's a significantly lower barrier to entry.

Turning this around, it might be cheaper to store the data where the NMC is, since the physical security measures are the same, but the vulnerabilities are smaller.

Of course the physical security includes a legal aspect. The whitepaper nods to "jurisdictional independence" several times. They don't explain what they mean, but they may be trying to imply that the data sent from the ground to the SpaceBelt does not traverse the ground infrastructure where the NMC is located, and therefore is not subject to any legal restrictions there, such as GDPR.

Very nice, and IANAL, but doesn't the Outer Space Treaty establish a regime of absolute responsibility for signatory nations? I only know that the OST is quite unlike the Law of the Sea: because of the absolute responsibility there is no salvage. Therefore, a case can be made that if the responsible nation is under GDPR, the whole SpaceBelt is too.

The above considerations apply to the "sovereign" or national data, but the international business faces more. The whitepaper implies that accessing the data may be a simple matter of "leasing VSATs", but governments still have the power to deny this access. Usually radio frequency licensing is involved, as in the case of OneWeb in Russia. The whitepaper mentions using traditional GSO comsats as relays, thus shifting the radio spectrum licensing hurdles onto the comsat operators. But there may be outright bans as well. I'm sure the Communist government of mainland China will not be happy if SpaceBelt users start downloading Falun Gong literature from space.

One other thing. If frying SpaceBelt with lasers is too hard, there are other ways. Russia, for example, is experimenting with a rogue satellite that approaches comsats. It's not doing anything too bad to them at present, but so much for the "extreme physical isolation". If you thought that using a SpaceBelt VSAT would isolate you from the risk of Russian submarines tapping undersea cables, you might want to reconsider.

Overall, it's not like I would not love to work at Cloud Constellation Corporation, implementing the basic technologies their project needs. Sooner or later, humanity will have computing in space, might as well do it now. But their pitch needs work.

Finally, for your amusement:

In the future, the SpaceBelt system will be enabled to host docker containers allowing for on-orbit data processing in-situ with data storage.

Congratulations, Docker. You've become the xerox of cloud. (In the U.S., Xerox was ultimately successful in fighting the dilution: everyone now uses the word "photocopy". Not that the litigation helped them remain relevant.)


Reinventing a radio wheel [05 Jan 2019|09:47pm]

I tinker with software radio as a hobby and I am stuck solving a very basic problem. But first, a background exposition.

Bdale, what have you done to me

Many years ago, I attended an introductory lecture on software radio at a Linux conference we used to have - maybe OLS, maybe LCA, maybe even ALS/Usenix. The presenter was Bdale Garbee, whom I mostly knew as a Debian guy. He outlined a vision of Software Defined Radio: take what used to be a hardware problem, re-frame it as a software problem, and let hackers hack on it.

Back then, people literally had sound cards as receiver back-ends, so all Bdale and his cohorts could do was HF, narrow band signals. Still, the idea seemed very powerful to me and caught my imagination.

A few years ago, the RTL-SDR appeared. I wanted to play with it, but nothing worthy came to mind, until I started flying and thus looking into various aviation data link signals, in particular ADS-B and its relatives TIS and FIS.

Silly government, were feet and miles not enough for you

At the time the FAA became serious about ADS-B, two data link standards were available: Extended Squitter aka 1090ES at 1090 MHz and Universal Access Transceiver aka UAT at 978 MHz. The rest of the world was converging quickly onto 1090ES, while UAT had a much higher data rate, which permitted e.g. transmission of weather information. The FAA sat like Buridan's ass in front of two heaps of hay, and decided to adopt both 1090ES and UAT.

Now, if airplane A is equipped with 1090ES and airplane B is equipped with UAT, they can't communicate. No problem, said the FAA, we'll install thousands of ground stations that re-transmit the signals between the bands. Also, we'll transmit weather images and data on UAT. The result is that UAT carries a lot of signals all the time, which I can receive.

Before I invent a wheel, I invent an airplane

Well, I could, if I had a receiver that could decode a 1 megabit/second signal. Unfortunately, the RTL-SDR could only snap 2.8 million I/Q samples per second in theory. In practice, even less. So, I ordered an expensive receiver called AirSpy, which was said to capture 20 million samples per second.

But I was too impatient to wait for my AirSpy, so I started thinking about whether I could somehow receive UAT with the RTL-SDR, and I came up with a solution. I let it clock at exactly twice the speed of UAT, a little more than 1 Mbit/s. Then, since UAT uses PSK2 encoding, I would compare phase angles between samples. Now, you cannot know for sure where the bits fall over your samples. But you can look at the decoded bits and see whether they are garbage or a packet. Voilà: making the impossible possible, right at Shannon's boundary.
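The trick can be sketched in a few lines (a toy illustration with synthetic samples, not my actual decoder; the function name and the ±π/2 modulation step are made up for the example):

```python
import numpy as np

def demod_psk2(iq, bit_offset=0):
    """Demodulate a 2x-oversampled PSK2 burst by comparing phase
    angles between adjacent samples.

    iq         -- complex array of I/Q samples at twice the bit rate
    bit_offset -- 0 or 1: which of the two samples per bit to use
    """
    phase = np.angle(iq)
    # Phase step between consecutive samples, re-wrapped to [-pi, pi)
    dphi = np.mod(np.diff(phase) + np.pi, 2 * np.pi) - np.pi
    # One decision per bit: positive step decodes as 1, negative as 0
    return (dphi[bit_offset::2] > 0).astype(np.uint8)

# Synthetic burst: each bit is a +/- pi/2 phase step, two samples per bit
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
steps = np.repeat(np.where(bits, np.pi / 2, -np.pi / 2), 2) / 2
iq = np.exp(1j * np.cumsum(steps))
```

Since you cannot know which of the two samples starts a bit, a real decoder tries both offsets and keeps whichever yields something that passes the packet checks.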

When I posted my code to GitHub, it turned out that a British gentleman by the handle of mutability had been thinking about the same thing. He contributed a patch or two, but he also had his own codebase, at which I hacked a bit too. His code performed better, and it found wide adoption under the name dump978.

Meanwhile, the AirSpy problem

The AirSpy ended up collecting dust, until now. I started playing with it recently, and used the 1090ES signal for tests. It was supposed to be easy... Unlike the phase shifts of UAT, 1090ES is a much simpler signal: a rising front is 1, a falling front is 0, and a stable level is invalid and is used in the preamble. How hard can it be, right? Even when I found that the AirSpy only receives the real component, it seemed immaterial: 1090ES is not phase-encoded.
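Taken literally, that description makes for a one-screen bit slicer (my own toy version, on idealized magnitude samples with two per bit; real decoding at AirSpy rates is messier):

```python
import numpy as np

def slice_bits(mag):
    """mag: magnitude samples, two per bit. A rising front within the
    bit decodes as 1, a falling front as 0; a stable level is invalid
    (None), legal only inside the preamble."""
    out = []
    for a, b in zip(mag[0::2], mag[1::2]):
        if b > a:
            out.append(1)
        elif b < a:
            out.append(0)
        else:
            out.append(None)
    return out

# Idealized: bit 1 = low->high, bit 0 = high->low, then a stable level
samples = np.array([0, 1,  1, 0,  0, 1,  1, 0,  0.5, 0.5])
```

On the clean sequence above this yields 1, 0, 1, 0 and then the invalid marker; the hard part, as the rest of the post shows, is deciding where those pairs start in the first place.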

But boy, was I wrong. To begin with, I need to hunt a preamble, which synchronizes the clocks for the remainder of the packet. Here's what it looks like:

The fat green square line on the top is a sample that I stole from our German friends. The thin green line is a 3-sample average of abs(sample). And the purple is raw samples off the AirSpy, real-only.

My first idea was to compute a "discriminant" function: a kind of integrated difference between the ideal function (in fat green) and the actual signal. If the discriminant is smaller than a threshold, we have our preamble. The idea was a miserable failure. The problem is, the signal is noisy. So, even when the signal is normalized, the noise in a more powerful signal inflates the discriminant enough that it becomes larger than the discriminant of background noise.
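Here is a toy version of that discriminant, with a made-up 16-sample preamble template. On this clean synthetic signal it finds the burst; the failure described above shows up when the noise riding on a strong real signal inflates the score past the background level:

```python
import numpy as np

def discriminant(window, template):
    """Integrated difference between an amplitude-normalized window
    and the ideal preamble template; small values suggest a preamble."""
    w = window / (np.max(np.abs(window)) + 1e-12)
    return np.sum(np.abs(w - template))

# Toy 16-sample "preamble": pulses at fixed positions
template = np.zeros(16)
template[[0, 2, 7, 9]] = 1.0

rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal(200)         # background noise
signal[50:66] += 5.0 * template                 # a strong burst at 50
signal[50:66] += 0.5 * rng.standard_normal(16)  # noise grows with power

# Slide the template across the capture, keep the best-scoring offset
scores = np.array([discriminant(signal[i:i + 16], template)
                   for i in range(len(signal) - 16)])
best = int(np.argmin(scores))
```

In the toy run the minimum lands on the burst; on real captures, the per-window normalization is exactly what lets strong-signal noise swamp the threshold.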

Mind, this is a long-solved problem. A software receiver for 1090ES with AirSpy exists. I'm just playing here. Still... how do real engineers do it?


The New World [21 Dec 2018|10:41pm]

well I had to write a sysv init script today and I wished it was systemd

— moonman, 21 December 2018


And to round out the 2018 [20 Dec 2018|03:30pm]

To quoth:

Why not walk down the wider path, using GNU/Linux as DOM0? Well, if you like the kernel Linux, by all means, do that! I prefer an well-engineered kernel, so I choose NetBSD. [...]

Unfortunately, NetBSD's installer now fails on many PCs from 2010 and later. [...]

Update 2018-03-11: I have given up on NetBSD/Xen and now use Gentoo GNU/Linux/Xen instead. The reason is that I ran into stability problems which survived many NetBSD updates.

You have to have a heart of stone not to laugh out loud.

P.S. Use KVM already, sheesh.

P.P.S. This fate also awaits people who don't like SystemD.


Firefox 64 autoplay in Fedora 29 [18 Dec 2018|11:53am]

With one of the recent Firefox releases (the current version is 64), autoplay videos began to play again, although they now start muted [1]. None of the previously working methods work (e.g. about:config media.autoplay.enabled), the documented preference is not there in 64 (promised for 63: it either never happened, or was removed), and extensions that purport to disable autoplay do not work.

The solution that does work is setting media.autoplay.default to 1.
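For the record, the same preference can be pinned in user.js in the profile directory, a standard Firefox mechanism, so it survives the next round of about:config roulette:

```javascript
// In <profile directory>/user.js; applied on every Firefox start.
// For media.autoplay.default: 0 = allow autoplay, 1 = block it.
user_pref("media.autoplay.default", 1);
```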

Finding the working option required a bit of effort. I'm sure this post will become obsolete in a few months and add to the Internet noise that makes it harder to find a working solution when Mozilla changes something again. But hey. Everything is shit, so whatever.

[1] Savour the bitterness of realization that an employee of Mozilla thought that autoplay was okay to permit as long as it was muted.


IBM PC XT [13 Dec 2018|12:07am]

By some chance, I visited an old science laboratory where I used to play when I was a teenager. They still have a pile of old equipment, including the IBM PC XT clone that I tinkered with.

Back in the day, they also had a PDP-11, already old then, which had a magnetic tape unit, and they had data sets on those tapes. The PC XT was the new hotness, and they wanted to use it for data visualization. It was a difficult task to find a place that could read the data off the tape and write it to 5.25" floppies. Impossible, really.

I stepped in and set out to connect the two over RS-232. I threw together a program in Turbo Pascal, which did the job of shuffling the characters between the MS-DOS box and the mini, thus allowing us to log in and initiate a transfer of the data. I don't remember if we used an ancient Kermit, or just printed the numbers in FORTRAN, then captured them on the PC.
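The Turbo Pascal original is long gone, but the idea fits in a page. Here's a rough Python sketch of the same character shuffling, with pipes standing in for the two serial ports (with a real serial library you'd register the port descriptors instead):

```python
import os
import selectors

def shuffle(port_a, port_b, stop_token=b"\x04"):
    """Shuttle bytes in both directions between two "ports", each a
    (read_fd, write_fd) pair, until EOF or a stop token shows up.
    Roughly what the old bridge did between the MS-DOS serial port
    and the PDP-11 console line."""
    sel = selectors.DefaultSelector()
    sel.register(port_a[0], selectors.EVENT_READ, port_b[1])
    sel.register(port_b[0], selectors.EVENT_READ, port_a[1])
    while True:
        for key, _ in sel.select():
            data = os.read(key.fd, 512)
            if data:
                os.write(key.data, data)  # forward to the other side
            if not data or stop_token in data:
                return

# Demo: one line arrives "from the PDP-11" and comes out on the "PC" side
from_pdp = os.pipe()   # (read, write): the demo writes, the bridge reads
to_pc = os.pipe()      # (read, write): the bridge writes, the demo reads
to_pdp = os.pipe()     # return path, unused in this one-way demo
from_pc = os.pipe()    # return path, unused in this one-way demo

os.write(from_pdp[1], b"READY\r\n" + b"\x04")
shuffle((from_pdp[0], to_pdp[1]), (from_pc[0], to_pc[1]))
captured = os.read(to_pc[0], 512)
```

The real program, of course, talked to the UART directly; the select-and-copy loop is the whole trick.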

The PDP-11 didn't survive for me to take a picture, but the PC XT did.


Twitter [01 Dec 2018|10:22pm]

First things first: I am sorry for getting passive-aggressive on Twitter, although I was mad and the medium encourages this sort of thing. But this is the world we live in: the way to deal with computers is to google the symptoms, and hope that you don't have to watch a video. Something about this world disagrees with me so much that I almost boycott Wikipedia and Stackoverflow. "Almost" means that I go very far, even Read The Fine Manuals, before I resort to them. As the path in the tweet indicated, I built Ceph from source in order to debug the problem. But as the software stacks get thicker and thicker, source gets less and less useful, or at least it loses the competition to googling for symptoms. My only hope at this point is for a merciful death to take me away before these trends destroy human civilization.


Where is Amazon? [29 Oct 2018|10:26pm]

Imagine, purely hypothetically, that you were a kernel hacker working for Red Hat, and for whatever reason you wanted to find a new challenge at a company with a strong commitment to open source. What are the possibilities?

To begin with, as the statistics from the Linux Foundation's 2016 report demonstrate, you have to be stark raving mad to leave Red Hat. If you do, Intel and AMD look interesting (hello, Alan Cox). IBM is not bad, although since yesterday, you don't need to quit Red Hat to work for IBM anymore. Even Google, famous for being a black hole that swallows good hackers who are never heard from again, manages to put up a decent showing, Fuchsia or no. Facebook looks unimpressive (no disrespect to DaveJ intended).

Now, the no-shows. Both of them hail from Seattle, WA: Microsoft and Amazon. Microsoft made an interesting effort to adopt Linux into its public cloud, but their strategy was to make Red Hat do all the work. Well, as expected. Amazon, though, is a problem. I managed to get into an argument with David "dwmw2" Woodhouse on Facebook about it, where I brought up a somewhat dated article at The Register. The central claim is that the lack of Amazon's contribution is the result of a policy rolled all the way from the top.

(...) as far as El Reg can tell, the internet titan has submitted patches and other improvements to very few projects. When it does contribute, it does so typically via a third party, usually an employee's personal account that is not explicitly linked to Amazon.

I don't know if this culture can be changed quickly, even if Bezos suddenly changes his mind.


I'd like to interject for a moment [11 Oct 2018|08:16am]

In a comment on the death of G+, elisteran brought up something that has long annoyed me out of all proportion to its actual significance. What do you call a collection of servers communicating through NNTP? You don't call them "INN", you call them "Usenet". The system of hosts communicating through SMTP is not called "Exim", it is called "e-mail". But when someone wants to escape G+, they often consider "Mastodon". Isn't it odd?

Mastodon is merely an implementation of the Fediverse. As it happens, only one of my Fediverse channels runs on Mastodon (the Japanese-language one at Pawoo). The main one still uses Gnusocial; the anime one was on Gnusocial and migrated to Pleroma a few months ago. All of them communicate using the OStatus protocol, although a movement is afoot to switch to ActivityPub. Hopefully it's more successful than the migration from RSS to Atom was.

Yet I noticed that a lot of people fall for the idea that Mastodon is an exclusive brand. Rarely does one have to know or care what MTA someone else uses. Microsoft was somewhat successful in establishing Outlook as such a powerful brand, to the exclusion of compatible e-mail software. The maintainer of Mastodon is doing his hardest to present it as a similar brand, and regrettably, he's very successful at that.

I guess what really drives me mad about this is how Eugen uses his mindshare advantage to drive protocol extensions. All Fediverse implementations generally communicate freely with one another, but as Pleroma and Mastodon develop, they gradually leave Gnusocial behind in features. In particular, Eugen found a loophole in the protocol which allows attaching pictures without using up space in the message for the URL. When Gnusocial displays a message with an attachment, it only displays the text, not the picture. This actually used to be a server setting, in case you want to keep your instance safe from NSFW imagery and save a little bandwidth. But these days pictures are so prevalent that it's pretty much impossible to live without receiving them. In this, Eugen has completed the "extend" phase and is moving on to "extinguish".

I'm not sure if this is a lost cause by now. At least I hope that members of my social circle migrate to the Fediverse in general, and not to Mastodon from the outset. Of course, the implementation does matter when they make choices. As I mentioned, for anything but Linux discussions pictures are essential, so one cannot reasonably use a Gnusocial instance for anime, for example. And I can see some users liking Mastodon's UI. And Mastodon's native app support is better (or not). So yes, by all means, if you want to install Mastodon, or join an instance that's running Mastodon, be my guest. Just realize that Mastodon is an implementation of the Fediverse and not the Fediverse itself.

UPDATE 2019/02/11: Chris finds a silver lining.


Ding-dong, the witch is dead [09 Oct 2018|09:46am]

Reactions by G+ inhabitants were better than expected at times. Here's Jon Masters:

For the three people who care about G+: it's closing down. This is actually a good thing. If you work in kernel or other nerdy computery circles and this is your social media platform, I have news for you...there's a world outside where actual other people exist. Try it. You can then follow me on Twitter at @jonmasters when you get bored.

Rock on. Although LJ was designed as a shitty silo, it wasn't powerful enough to make itself useless. For example, outgoing links aren't limited. That said, LJ isn't bulletproof: the management is pushing a "new" editor that does not allow HTML. The point is, though, there's a real world out there.

And, some people are afraid of it, and aren't ashamed to admit it. Here's Steven Rostedt in Jon's comments:

In other words, we are very aware of the world outside of here. This is where we avoided that world ;-)

So weak. Jon is a titan among his entourage.

Kir enumerated escape plans thus (in my translation):

Where to run is unclear. I don't want to Facebook; Telegram is kind of a marginal platform (although Google+ is marginal too); too lazy to stand up a standalone. Nothing but LJ comes to mind.

One thing that comes across very strongly is how reluctant people are to run their own infrastructure. For one thing, the danger of a devastating DDoS is absolutely real. And then you have to deal with spam. Those who do not have the experience also tend to over-estimate the amount of effort you have to put into running "dnf update" once in a while.

Personally, I think that although it's annoying, of course, the time wasted on the infra is not that great, or at least it wasn't for me. The spam can be kept under control with minimal effort. Or, it can be addressed in drastic ways: for example, my anime blog simply does not have comments at all. As far as DoS goes, yes, it's a lottery. But then a silo platform can easily die (like G+), or ban you. This actually happens a lot more often than those hiding their heads in the sand like to admit. And you don't need to go as far as admitting your support for President Trump in order to get banned. Anything can trigger it, and the same crazies that DoS you will also try to deplatform you.

One other idea I was very successful with, and that many people have trouble accepting, is having several channels for social posting (obviously CKS was ahead of the times with separating pro and hobby). Lots and lots of G+ posters insist on dumping all the garbage into one bin, instead of separating their output. Perhaps now they'll find a client or device that allows them to switch accounts easily.


Python and journalism [07 Oct 2018|03:04pm]

Back in July, The Economist wanted to judge the popularity of programming languages and used ... Google Trends. Python is rocketing up, BTW. Go is not even mentioned.


Postgres vs MySQL [25 Sep 2018|08:15pm]

Unexpectedly in the fediverse:

[...] in my experience, postgres crashes less, and the results are less devastating if it does crash. I've had a mysql crash make the data unrecoverable. On the other hand I have a production postgres 8.1 installation (don't ask) that has been running without problems for over 10 years.

There is more community information and more third-party tools that require mysql, it has that advantage. the client tools for mysql are easier to use because the commands are in plain english ("show tables") unlike postgres that are commands like "\dt+". but if I'm doing my own thing though, I use postgres.

Reiser, move over. There's a new game in town.


Huawei UI/UX fail [24 Sep 2018|08:20pm]

The Huawei M3 gave me an unpleasant surprise a short time ago. I had it in my hands while doing something and my daughter (age 30) offered to hold it for me. When I received it back and turned it on, it was factory reset. What happened?

It turned out that it's possible to reset the blasted thing merely by holding it. If someone grabs it and pays no attention to what's on the screen, it's easy to press and hold the edge power button inadvertently. That brings up a dialog with two touch buttons, for power off and reset. The same hand that's holding the power button touches the screen and causes the reset (the knuckle where the finger meets the palm does that perfectly).

The following combination of factors makes this happen:

1. The power button is on the edge, and it sticks out. Some tablets, like the Kindle or the Nexus, have somewhat slanted edges, so the buttons are somewhat protected. Holding the tablet across the face engages the power button. They could at least have placed the power button on the short edge of the device.

2. The size of the tablet is just large enough that a normal person can hold it with one hand, but has to stretch, so the base knuckles touch the surface. On a larger tablet, a human hand is not large enough to hold it like that, and on a phone-sized device the palm cups, so it does not touch the center of the screen.

3. The protection against an accidental reset is essentially absent.

Huawei, not even once.

Google, bring back the Nexus 7, please.


Robots on TV [17 Sep 2018|02:35pm]

Usually I do not watch TV, but I traveled and saw a few of them in public food intake places and such. What caught my attention were ads for robotics companies, aimed at business customers. IIRC, the companies were called generic names like "Universal Robotics" and "Reach Robotics". Or so I recall; on second thought, Reach Robotics is a thing, but it focuses on gaming, not traditional robotics. But the ads depicted robots doing some unspecified stuff: moving objects from place to place. Not dueling bots. Anyway, what's up with this? Is there some sort of revolution going on? What was the enabler? Don't tell me it's all the money released by the end of Moore's Law, seeking random fields of application.

P.S. I know about the "Pentagon's Evil Mechanical Dogs" by Boston Dynamics. These were different, manipulating objects in the environment.

