Pete Zaitcev's Journal
Pete Zaitcev

Next stop, SMT [26 May 2014|06:50pm]

The little hardware project, mentioned previously, continues to chug along. A prototype is now happily blinking LEDs on a veroboard:

[image: zag-avr-2.photo1]

Now it's time to use real technology, so that the thing can be used onboard a car or airplane. Since the heart of the design is a chip in an LGA-14 package, we are looking at soldering a surface-mounted part with a 0.35 mm pitch.

I reached out to members of a local hackerspace for advice, and they suggested forgetting what people post to the Internet about wave-soldering in an oven. Instead, buy a cheap 10x microscope, a fine tip for my iron, and some flux and solder paste, then do it all by hand. As Yoda said, there is only do and do not, there is no try.


Digital life in 2014 [24 May 2014|09:58pm]

I do not get out often, and my clamshell cellphone incurs a bill of $3/month. So, imagine my surprise when, at a dinner with colleagues, everyone pulled out a charger brick and plugged their smartphone into it. Or, actually, one guy did not have a brick with him, so someone else let him tap into a 2-port brick (pictured above). The same ritual repeated itself at every dinner!

I can offer a couple of explanations. One is that it is much cheaper to buy a smartphone with a poor battery life and a brick than to buy a cellphone with a decent battery life (decent being >= 1 day, so you could charge it overnight). Or, that market forces are aligned in such a way that there are no smartphones on the market that can last for a whole day (anymore).

UPDATE: My wife is a smartphone user and explained it to me. Apparently, sooner or later everyone hits an app which is an absolute energy hog for no good reason, and is unwilling to discard it (in her case it was Pomodoro, but it could be anything). Once that happens, no technology exists to pack enough battery into a smartphone. Or so the story goes. In other words, they buy a smartphone hoping that they will not need the brick, but inevitably they do.


OpenStack Atlanta 2014 [23 May 2014|12:38pm]

The best part, I think, came during the Swift Ops session, when our PTL, John Dickinson, asked for a show of hands. For example -- without revealing any specific proprietary information -- how many people run clusters of less than 10 nodes, 10 to 100, 100 to 1000, etc. He also asked how old the production clusters were, and whether anyone had deployed Swift in 2014. Most clusters were older, and only one man raised his hand. John asked him what difficulties he had setting it up, and the man said: "we had some, but then we paid Red Hat to fix it up for us, and they did, so it works okay now." I felt so useful!

The hardware show was pretty bare, compared to Portland, where Dell and OCP brought out cool things.

HGST showed their Ethernet drive, but they used a chassis so dire that I don't even want to post a picture. OCP did the same this year: they brought a German partner who demonstrated a storage box that looked like it was built in a basement in Kherson, Ukraine, while under siege by Kiev forces.

Here's a poor pic of cute Mellanox Ethernet wares: a switch, NICs, and some kind of modern equivalent of GBICs for fiber.

Interestingly enough, although Mellanox displayed Ethernet only, I heard in other sessions that Infiniband was not entirely dead. Basically if you need to step beyond bonded 10GbE, there's nothing else for your overworked Swift proxies: it's Infiniband or nothing. My interlocutor from SwiftStack implied that a router existed into which you could plug your Infiniband pipe, I think.


Seagate Kinetic and SMR [15 May 2014|08:35am]

In the trivial-things-I-never-noticed department: Kinetic might have technical merit. Due to the long history of vendors selling high-margin snake oil, I was somewhat sceptical of object-addressed storage. However, there was a session with Joe Arnold of SwiftStack and a corporate person from Seagate (with an excessively complicated title), where they mentioned off-hand that all this "intelligent" stuff is actually supposed to help with SMR. As everyone probably knows, shingled drives implement complicated read-modify-write cycles to support the traditional sector-addressed model, and the performance penalty is worse than that of a 4K drive with a 512-byte interface.

I cannot help thinking that it would be even better to find a more natural model to expose the characteristics of the drive to the host. Perhaps some kind of "log-structured drive". I bet there are going to be all sorts of overheads in the drive's filesystem that negate the increase in areal density. After all, shingles only give you about 2x. The long-term performance of any object-addressed drive is also in doubt as the fragmentation mounts.
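To make that hand-waving slightly more concrete, here is the kind of host-visible interface I imagine. This is purely a toy sketch of my own, not anything Seagate or anyone else proposes:

class LogStructuredDrive(object):
    # Toy model: the host never overwrites in place, it only appends,
    # and the drive decides where the data lands; cleaning and shingle
    # management remain the drive's internal problem.
    def __init__(self):
        self.log = []      # stand-in for the shingled bands
        self.index = {}    # key -> position in the log

    def put(self, key, blob):
        self.log.append((key, blob))        # strictly sequential write
        self.index[key] = len(self.log) - 1

    def get(self, key):
        return self.log[self.index[key]][1]

drive = LogStructuredDrive()
drive.put('o1', b'payload')
assert drive.get('o1') == b'payload'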

BTW, a Seagate guy swore to me that Kinetic is not patent-encumbered and that they really want other drive vendors to jump on the bandwagon.

UPDATE: Jeff Darcy brought up HGST on Twitter. The former-Hitachi guys (owned by WD now) do something entirely different: they allow apps, such as the Swift object server, to run directly on the drive. It's cute, but it does nothing about the block-addressing API being insufficient to manage a shingled drive. When software runs on the drive, it still has to talk to the rest of the drive somehow, and HGST did not add a different API to the kernel. All it does is kick the can down the road and hope a solution comes along.

UPDATE: Wow, even Sage.


Mental note [08 May 2014|01:25pm]

I went through the architecture manual for Ceph and penciled down a few ideas that could be applied to Swift. The biggest one is that we could benefit from some kind of massive proxy or a PACO setup.

Unfortunately, I see problems with a large PACO. Memcached efficiency will nosedive, for one. But also, how are we going to make sure clients are spread right? There's no cluster map, and thus clients can't know which proxy in the PACO setup is closer to the location. In fact, we deliberately prevent them from knowing too much. They don't even know the cluster's partition size.

The reason this matters is that I expect EC to increase the CPU requirements on proxies, which are CPU-bound in most clusters already. Of course, what I feel may not be what actually occurs, so maybe it does not matter.


Get off my lawn [01 May 2014|12:19pm]

My only previous contact with erasure codes was in the field of data transmission (naturally). While a student at Lomonosov MSU, I worked at a company that developed a network for PCs, called "Micross" and led by Andrey Kinash, IIRC. Originally it ran on top of MS-DOS 3.30, and was ported to 4.01 and later versions over time.

Ethernet was absurdly expensive in the country back in 1985, so the hardware used the built-in serial port. A small box with a relay attached the PC to a ring, around which a software token circulated. The primary means of keeping the ring's integrity was the relay, controlled by the DTR signal. In case that was not enough, the system implemented a double-ring isolation not unlike FDDI's.

The baud rate was 115.2 kbit/s, and that interfered with MS-DOS. I should note that, notionally, the system permitted data transfer while the PCs ran their usual office applications. But even if a PC was otherwise idle, a hit by the system timer would block out interrupts long enough to drop 2 or 3 characters. The solution was to implement a kind of erasure coding, which recovered not only from corruption, but also from the loss of data. The math and the implementation in Modula-2 were done by Mr. Vladimir Roganov.
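The original Modula-2 code is long gone, but the core trick can be sketched in a few lines: add one XOR parity block per group, and a single dropped block can be rebuilt, as long as you know which position was lost (an erasure, as opposed to silent corruption):

def xor_blocks(blocks):
    # XOR all blocks together, byte by byte
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, x in enumerate(bytearray(b)):
            out[i] ^= x
    return bytes(out)

data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)

# The timer interrupt ate block 1, but we know which one went missing,
# so XOR-ing the survivors with the parity block reproduces it.
survivors = [data[0], data[2], parity]
assert xor_blocks(survivors) == data[1]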

I remember that at the time, the ability to share directories over the LAN without a dedicated server was what impressed customers the most, but I thought that Roganov's EC was perhaps the most praiseworthy part, from a nerdiness perspective. Remember that all of that ran on a 12 MHz 80286.


gEDA [26 Apr 2014|07:41pm]
[image: zag-avr-1]

For the next stage, I switched from Fritzing to gEDA and was much happier for it. Certainly, gEDA is old-fashioned. The UI of gschem is really weird, and even has problems dealing with click-to-focus. But it works.

I put the phase onto a Veroboard, as I'm not ready to deal with ordering PCBs yet.


VPN versus DNS [28 Mar 2014|12:26pm]

For years, I did my best to ignore the problem, but CKS inspired me to blog about this curious networking banality, in case anyone has wisdom to share.

The deal is simple: I have a laptop with a VPN client (I use vpnc). The client creates a tun0 interface and some RFC 1918 routes. My home RFC 1918 routes are more specific, so routing works great. The name service does not.

Obviously, if we trust the DHCP-supplied nameserver, it has no work-internal names in it. The stock solution is to let vpnc install an /etc/resolv.conf pointing to the work-internal nameservers. Unfortunately, this does not work for me, because I have a home DNS zone, zaitcev.lan. The work-internal DNS does not know about that one.

Thus, I would like some kind of solution that routes DNS requests according to a configuration. Requests to work-internal namespaces (such as *.redhat.com) would go to the nameservers delivered by vpnc (I think I can make it write something like /etc/vpnc/resolv.conf that does not conflict). Other requests would go to the infrastructure name service, be it a hotel network or the home network. The home network is capable of serving its own private authoritative zones and forwarding the rest. That's the ideal; how do I accomplish it?

I attempted to apply a local dnsmasq, but could not figure out whether it can do what I want, and if yes, how.

For now, I have some scripting that caches work-internal hostnames in /etc/hosts. That works, somewhat. Still, I cannot imagine that nobody thought of this problem. Surely, thousands are on VPNs, and some of them have home networks. And... nobody? (I know that a few people just run VPN on the home infrastructure; that does not help my laptop, unfortunately).

UPDATE: Several people commented with interesting solutions. You can count on Mr. robbat2 to be on the bleeding edge and use unbound manually. I went with the NM magic suggested by Mr. nullr0ute. In F20, you need to edit /etc/NetworkManager/NetworkManager.conf and add "dns=dnsmasq" there. Then, NM runs dnsmasq with the following magic /var/run/NetworkManager/dnsmasq.conf:

server=/redhat.com/10.11.5.19
server=/10.in-addr.arpa/10.11.5.19
server=/redhat.com/10.5.30.160
server=/10.in-addr.arpa/10.5.30.160
server=192.168.128.1
server=fd2d:acfb:74cc:1::1

It is exactly the syntax Ewen tried to impart in his comment, but I'm too stupid to put 2 and 2 together this way, so I have NM do it.

NM also starts vpnc in such a way that it does not need to damage any of my old hand-made config in /etc/vpnc, which is a nice touch.
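For reference, the NetworkManager.conf change itself is a one-liner; assuming the stock Fedora file, the relevant part ends up looking roughly like this:

# /etc/NetworkManager/NetworkManager.conf (only the relevant section)
[main]
dns=dnsmasq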

See also: bz#842037.

See also: Chris using unbound.


Okay, it's all broken. Now what? [11 Mar 2014|08:02pm]

A rant in ;login was making the rounds recently (h/t @jgarzik), which I thought was not all that relevant... until I remembered that the Swift Power Calculator has mysteriously stopped working for me. Its creator is powerless to do anything about it, and so am I.

So, it's relevant all right. We're in big trouble, even if Gmail kind of works most of the time. But the rant makes no recommendations, only observations. So it's quite unsatisfying.

BTW, it reminds me of a famous preso by Jeff Mogul, "What's wrong with HTTP and why it does not matter". Except Mogul's rant was more to the point. They don't make engineers like they used to, apparently. Also notably, I think, Mogul prompted the development of RESTful improvements. But there's nothing we can do about the excessive thickness of our stacks (that I can see). It's just spiralling out of control.


AVR, Fritzing, Inkscape [09 Mar 2014|10:23pm]
[image: zag-avr-0-sch]

I suppose everyone has to pass through a hardware phase, and mine is happening now, for which I implemented an LED blinker with an ATtiny2313. I don't think it even merits the usual blog laydown. Basically, all it took was following tutorials to the letter.

For the initial project, I figured that learning gEDA would take too much time, so I unleashed my inner hipster and used Fritzing. Hey, it lets you plan breadboards, so there. And well, it was a learning experience and no mistake. Crashes, impossible-to-undo changes, UI elements outside of the screen, everything. Black magic everywhere: I could never figure out how to merge wires, dedicate a ground wire/plane, or edit labels (so all of them are incorrect in the schematic above). The biggest problem was the lack of library support together with an awful parts editor. Editing schematics in Inkscape was so painful that I resigned myself to doing a piss-poor job, evident in all the crooked lines around the ATtiny2313. I understand that Fritzing's main focus is the iPad, but this is just at the level of a typical outsourced Windows application.

Inkscape deserves a special mention due to the way Fritzing requires SVG files to be in a particular format. If you load and edit some of those, the grouping defeats Inkscape's features, so one cannot even select elements at times. And editing the raw XML causes the weirdest effects, so it's not like LyX-on-TeX, edit and visualize. At least our flagship vector graphics package didn't crash.

The avr-gcc is awesome though. 100% turnkey: yum install and you're done. Same for avrdude. No muss, no fuss, everything works.


Suddenly, Python Magic [06 Mar 2014|08:44pm]

Looking at a review by Solly today, I saw something deeply disturbing. A simplified version that I tested follows:


import unittest

class Context(object):
    def __init__(self):
        self.func = None
    def kill(self):
        self.func(31)

class TextGuruMeditationMock(object):

    # The .run() normally is implemented in the report.Text.
    def run(self):
        return "Guru Meditation Example"

    @classmethod
    def setup_autorun(cls, ctx, dump_with=None):
        ctx.func = lambda *args: cls.handle_signal(dump_with,
                                                   *args)

    @classmethod
    def handle_signal(cls, dump_func, *args):
        try:
            res = cls().run()
        except Exception:
            dump_func("Unable to run")
        else:
            dump_func(res)

class TestSomething(unittest.TestCase):

    def test_dump_with(self):
        ctx = Context()

        class Writr(object):
            def __init__(self):
                self.res = ''

            def go(self, out):
                self.res += out

        target = Writr()
        TextGuruMeditationMock.setup_autorun(ctx,
                                dump_with=target.go)
        ctx.kill()
        self.assertIn('Guru Meditation', target.res)

Okay, obviously we're setting a signal handler, which is a little lambda, which invokes the dump_with, which ... is a class method? How does it receive its self?!

I guess that the deep Python magic occurs in how the method target.go is prepared to become an argument. The only explanation I see is that Python creates some kind of activation record for this, which includes the instance (target) and the method, and that record is the object being passed down as dump_with. I knew that Python did it for scoped functions, where we have the global dict, the local dict, and all that good stuff. But this is different, isn't it? How does it even know that target.go belongs to target? In what part of the Python spec is it described?

UPDATE: Commenters provided hints with the key idea being a "bound method" (a kind of user-defined method).

A user-defined method object combines a class, a class instance (or None) and any callable object (normally a user-defined function).

When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound.
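In other words, a two-line demonstration (using __self__, which is the modern spelling of the im_self from the quote):

class Writr(object):
    def go(self, out):
        return out

target = Writr()
bm = target.go                  # fetching .go through the instance binds it
print(bm.__self__ is target)    # True: the bound method carries the instance
print(bm('hi'))                 # same as calling Writr.go(target, 'hi')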

Thanks, Josh et al.!

UPDATE: See also Chris' explanation and Peter Donis' comment re. unbound methods gone from py3.


Why I can never pass Google interview, part 3 [31 Jan 2014|02:38pm]

Saw a hilarious blog post about Google interviews, which contains the following gem:

Code in C++ or Java, which shows maturity. If you can only code in Python or bash shell, you're going to have trouble.

(emphasis mine)

It immediately reminds me how Google paid 1.6 billion dollars for a website coded entirely in Python.

Previously: FizzBuzz.


OpenStack and a Core Developer [21 Jan 2014|03:14pm]

Real quick: you know how the BSDs were supposed to have a "core bit" for "core committers"? If one was "on core", he could issue "cvs commit". Everyone else had to e-mail a patch to one of the core guys. One problem with that setup was that people are weak. A core guy could be severely tempted to commit something that was not rigorously tested or was otherwise questionable.

OpenStack addresses this problem by not actually letting "core" people commit anything. I'm on core for Swift, but I cannot do "git push" to master. I can only do "git review" and then ask someone else to approve it. Of course, this is easy to attack if one wants to. For example, committers can enter into a conspiracy and buddy-approve. It's possible. But at least we're somewhat protected against a late-night commit before a deadline by a guy who was hurried by problems at work.


Glie, eventlet, and Python 3 [01 Jan 2014|01:05pm]

Speaking of the Python 3 debacle, I honestly meant to do Glie in py3, but I wanted a built-in webserver and used eventlet for it. But eventlet is Python 2.x only, isn't it? What's a decent embeddable mini webserver with a WSGI interface for py3?
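For lack of a better answer, the baseline would be the stdlib's wsgiref: embeddable and WSGI-speaking, but a plain blocking server, nothing like eventlet. A minimal sketch:

# Baseline only: works on py3 out of the box, but blocks on every request.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from py3\n']

make_server('127.0.0.1', 8080, app).serve_forever()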


Hello, Glie [30 Dec 2013|10:50pm]

Although it was mentioned obliquely before, Glie now exists. It receives ADS-B data from an RTL-SDR in the 1090ES band and produces an image. I can hit refresh in the browser and watch airplanes coming in to land at a nearby airport in real time.

This stuff is hardly groundbreaking. Many such programs exist, some are quite sophisticated in interfacing to various mapping, geography, schedule, and airframe information services, as well as in the UI. This one is mainly different because it's my toy.

Actually, the general aim of this project is also different, because unlike most stuff out there, it is not meant to be a surveillance tool, but to provide a traffic awareness readout. No persistent database of any kind is involved. No map either. Instead, I'm going to focus on onboard features, such as relative motion history (so one can easily identify targets on a collision course).

But mostly, it's for fun and education. And already Glie is facing a few technical challenges:

  • The best orientation for an onboard display is "nose-up" (obviously). However, one can only derive a "track-up" orientation from a GPS. This is obviously wrong in the case of Glie flying on a helicopter, which can fly sideways. It is less obviously wrong in the case of a crosswind, but that can create a significant distortion. To get the nose direction, I have to acquire a compass readout, which seems quite challenging. It's not like common airplanes have AHRS sockets under their panels.
  • I really want the graphics anti-aliased, for better looks and finer precision, but I have no clue how to accomplish it. The standard TIS-B symbology essentially requires it, too, so for now I'm stuck with ad-hoc diamonds.
  • The darn FAA split ADS-B in the U.S. into two bands: 1090ES and UAT. The 1090 is a solved problem in the RTL-SDR space (although the performance of receivers is not excellent, and I'm thinking about building semi-hardware receivers in the future). However, UAT has a 1 Mbit/s data rate, and RTL-SDR falls flat on its face when trying to deal with it. I probably need something like an Ettus GNU Radio receiver, which is expensive ($800 and up, last I checked). Unfortunately, it increasingly looks like all the interesting traffic is going to follow the UAT route, come 2020 A.D. (the year of the ADS-B mandate).

For now, I'm going to take it easy and play with what I have, aiming for some kind of portable system. Perhaps someone will develop an open source UAT receiver on a practical platform in the meantime.


OpenStack is dead, says a defector [03 Dec 2013|11:09am]

Saw a pretty funny article today (thanks to Matt Asay's Google Juice). To begin, take a healthy gulp of petty historic revisionism:

Ceilometer’s quality is bad, even by OpenStack standards. [...]

What I came to understand is that solving metering was not the primary motive. No one really cared about that. Certainly not in the way one would approach a project they intended to deploy and operate. The primary motivation was to have a project so that someone could be a Project Technical Lead (PTL).

I'm here to tell you that the above is a bald-faced lie. In reality, there was a need to have metering, and people moved on to fulfill it. Undoubtedly Ceilometer's progress lifted some careers, while retarding others, but to assign "primary motivation" to the whole project this way is not truthful. If it were a random voice on the Internet, I'd call it "misinformed", but since the guy started the whole thing by establishing his "I was there" credentials, he can only be lying.

That said, Ceilometer's code may be bad, I can't know. Ask Eoghan for a state of the union on that.

One other funny thing was this:

How many engineers do you think are working on AWS? GCE? How many of those committers will be the ones responsible for the performance and failure characteristics of their code? How many of those committers are dedicated to producing a world class service bent on dominating an industry? There has been some interesting, and even impressive work dedicated to improve code reviews and continuous integration, but that should not be confused with a unified vision and purpose. [...]

This is the same "Cathedral vs. Bazaar" thing that RMS and ESR seemingly settled decades ago. Is OpenStack too "organic" in its development? Ironically, there never was any unifying vision and purpose behind AWS either (or Linux, for that matter [1]), in the terms of this article. S3 survived and prospered despite Vogels plugging Dynamo, for example.

I came to regard OpenStack as the best meritocracy in action. People who enjoy politicking end up on various committees. People who write excellent code for a major impact end up driving core projects. That's how Russell Bryant ended up in Nova. Or how Peter Portante leapfrogged me for Swift core: why let seniority lead when ability does better? But what happens to the people who lose this competition? The mediocre engineers, managers, and politicians -- the vast majority of contributors? Usually they end up drifting around the periphery, sometimes starting projects, sometimes contributing here and there, but mostly responding to the interests of their day jobs.

There's nothing wrong with that model; it's what happens anywhere from AWS to Linux, and it cannot spell the doom of OpenStack. Something else may, but not this.

So, overall, the article is funny to read, and it may point at some problems. I, for one, also think that Glance should not have existed. Instead, the image registry should have been in our main DB, and image storage should have been in something better suited, like Swift. I am far from assigning the blame for Glance to jockeying for a PTL position, because exactly the same naive mistake was committed in Aeolus, by a high-profile architect who was supposed to provide the unified vision and purpose.

[1] As I heard it related by Alan Cox, various industry people pestered Linus for the vision and purpose. His answer was always "world domination and penguins".


Chuck Thier on Swift at RAX [11 Nov 2013|08:45pm]

From the YouTube video:

The size is more than 85 PB; the number of objects was not mentioned (unless I missed it). It processes about 80 million requests each hour (from a half-joking remark). At least they revealed one number, which is welcome.
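(For scale, 80 million requests an hour works out to roughly 22,000 requests per second.)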

Nodes are 90 drives of 3 TB per box, on a 10G network. They use SSDs for the account and container servers.

They switched from Pound to HAProxy, using Intel hardware SSL termination. Maybe I should retire Pound from Fedora, too?

One thing I noticed is that Swift was a cathedral: driven by real-life requirements, 5 people sat in a little room for 9 months and wrote 10,000 lines of killer code. Only then was it opened up, included into OpenStack, etc. What would ESR say?

Also... The failure rate of hard drives is 10% per year! That includes the one in your laptop.


prelink 2007-2013 [26 Oct 2013|09:43pm]

Before we celebrate the death of prelink properly (no more insufferable cron jobs), let us toast its excellence, particularly its robustness under very difficult circumstances: when failure cannot be tolerated. A mistake in roto-rooting your libraries means the box fails to boot -- or worse. I want my code to be like Jakub's et al.

Throughout these years, I felt free to rpm -e prelink and expected everything to keep working fine, including all the yum upgrades.

I think the main reason prelink died is that it attacked the problem of optimizing bad software: mainly making OpenOffice and Firefox start quicker. Once bad software became good, prelink faded. Its benefits in the age of Python are not as great, and it does not help at all if you do not abuse shared libraries. The lesson here is that you cannot really fix bad software with a thin wrapper of good software.

And now, yay.


The killer colon of Lennart [23 Oct 2013|11:53am]

It appears that we have a little problem in OpenStack Swift on Fedora 21: every log entry has "journal:" inserted into it, so analysis scripts blow up. Before, it looked like so:

Oct 10 23:54:20 rhev-a24c-01 proxy-server 10.10.55.128 127.0.0.1 11/Oct/2013/03/54/20 GET /v1/AUTH_test%3Fformat%3Djson HTTP/1.0 200 [....]

Now, it looks like so:

Oct 10 23:57:49 kvm-rei journal: proxy-server 192.168.128.11 192.168.128.11 11/Oct/2013/03/57/49 GET /v1/AUTH_t1%3Fformat%3Djson HTTP/1.0 200 [....]

The word "journal" points to the likely culprit, and indeed the problem is that Systemd Journal v.208 intercepts all system logging, parses it, and attempts to find something it calls "identifier". See src/journal/journald-syslog.c: server_process_syslog_message, syslog_parse_identifier. An acceptable identifier is a word that ends in a colon, and Swift does not end identifiers with colons.

If no identifier was identified, Journal adds its own process name, hence "journal:". It then packs the parsed message back into a syslog message and forwards it to rsyslog using an excessively tricky backchannel that involves temporary per-user files and inotify (le sigh).

For added bizarreness, Systemd includes a provision to forward syslog messages as-is to a system logger, using a normal socket, only renamed, but for some reason rsyslog does not listen where $SystemLogSocketName instructs it to. So, Systemd dutifully forwards, the message is dropped by the kernel, and then the above farce proceeds. rsyslog listened on /run/systemd/journal/syslog in Fedora 19, but not in 21.

A simple cure would be to kneel before Lennart and add the colon to Swift logging. I filed a Gerrit review to do just that, but I am not too optimistic. Although it keeps the word count, it adds the colon to the server name:

Oct 10 23:57:49 kvm-rei proxy-server: 192.168.128.11 192.168.128.11 11/Oct/2013/03/57/49 GET /v1/AUTH_t1%3Fformat%3Djson HTTP/1.0 200 [....]

From there, the colon leaks into whatever webpage or report the operators' scripts generate, so they need fix-ups to strip colons, however trivial.

IMHO, the best alternative would be to make Swift log directly into files, like Keystone and other OpenStack services do. This way we also avoid the unpleasant problem of double-logging due to stock rsyslog sending everything above *.debug into /var/log/messages. But for that we would have to implement HUP-based rotation, while SIGHUP is already used for graceful shutdowns... Seems suboptimal.

UPDATE: A week later, Lennart commented:

It's rsyslog which adds the "journal:" string in there, not systemd or the journal itself. We actually are very careful to forward the exact same syslog datagram we recieved on to any other running syslog daemon. Except that rsyslog doesn't make use of this forwarding scheme, and goes directly to the journal files and recreates the original message on its own way from that, apparently in a broken way. I mean, actually we tried very hard to not people piss of people like this, because we don't really like reading bullshit stories like yours. Alas, it didn't help, ignorants successfully blame us for everything anyway.

Summary: don't bitch at us. Bitch at the rsyslog people. Or even better: don't bitch at all, do your homework, and then file a bug against rsyslog.

I don't appreciate all your hateful rethoric btw. But I guess that's the point of it.


OpenStack Swift drops WebOb bis [30 Sep 2013|10:00am]

A tweet by John, the Swift PTL, reminded me of something else on the topic of painful dependencies: I managed to get rid of WebOb in the Keystone middleware that the Swift proxy pulls in. This allows much easier packaging in Fedora. But the patch took 8 iterations. At times I wasn't sure that, between Dolph and Brant, it would be possible to get it in at all.

We dropped WebOb from Swift a while ago, but this was in a different repository, so nobody thought about it until I dropped "Requires: python-webob" from openstack-swift and people noticed that the Keystone middleware tracebacks in the proxy.

