tag:blogger.com,1999:blog-1395402578527076682024-03-08T10:26:37.998-05:00Clock's MindRants, and Ramblings about various things - tech, law, etc. I've been frustrated with Slashdot's unwillingness to accept some posts, so here they come...TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.comBlogger50125tag:blogger.com,1999:blog-139540257852707668.post-10663753819799867342023-04-12T15:30:00.005-04:002023-04-12T15:39:54.852-04:00Stack-In-A-Box... now available for GoWhile I was working at Rackspace, I built an API client testing package called <a href="https://pypi.org/project/stackinabox/" target="_blank">Stack-In-A-Box</a> which provides the ability to test calls against foreign APIs without going to the network, as you can see below using the built-in HelloService API example.
<script src="https://slashdot.org/slashdot-it.js" type="text/javascript"></script>
<script>hljs.initHighlightingOnLoad();</script>
<div>
<br />
</div>
<pre>
<code class="python">import unittest
import httpretty
import requests
import stackinabox.util.httpretty
from stackinabox.stack import StackInABox
from stackinabox.services.hello import HelloService
@httpretty.activate
class TestHttpretty(unittest.TestCase):
def setUp(self):
super(TestHttpretty, self).setUp()
StackInABox.register_service(HelloService())
def tearDown(self):
super(TestHttpretty, self).tearDown()
StackInABox.reset_services()
def test_basic(self):
stackinabox.util.httpretty.httpretty_registration('localhost')
res = requests.get('http://localhost/')
self.assertEqual(res.status_code, 200)
self.assertEqual(res.text, 'Hello')
</code>
</pre>
<div><br /></div>
<div>This does require that someone build out a mock of the API being called; however, that implementation can be as simple or as complicated as you want to make it. For example, I implemented the <a href="https://github.com/TestInABox/openstackinabox/blob/master/openstackinabox/services/cinder/v1/volumes.py" target="_blank">OpenStack Cinder V1 API </a> just enough for the project I was working on at the time - super simple, and very dumb. In comparison, I implemented the <a href="https://github.com/TestInABox/openstackinabox/tree/master/openstackinabox/services/keystone/v2" target="_blank">OpenStack Keystone V2 API</a> and <a href="https://github.com/TestInABox/openstackinabox/tree/master/openstackinabox/services/swift/v1" target="_blank">OpenStack Swift V1 API</a> with very complete implementations. The point is, it's all up to you as the developer or mock API implementer to decide how much effort to put into it and what to do.</div>
<div><br /></div>
<div>As I got into Golang I wanted to have a similar technology in Go. For languages like C/C++ this isn't too feasible since there's no standard HTTP library used by everyone. Python makes it easy since mocking is built into the language and the functionality can be integrated into any part of the stack - there are even competing libraries for doing so, and the Python Stack-In-A-Box is set up to support them. Go, however, works differently. It's not as flexible as Python for doing this, but it's far better than C/C++ since there's a defined interface and common client object.</div>
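<div><br /></div>
<div>What makes this workable at all is that Go's standard net/http client funnels every request through one well-defined interface - http.RoundTripper - via the Transport field on http.Client. As a rough illustration (plain standard library code, not part of Stack-In-A-Box - the type and values here are made up), a fake transport can intercept every request like this:</div>
<pre>
<code class="go">package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

// fakeTransport satisfies http.RoundTripper and answers every request
// itself instead of ever touching the network.
type fakeTransport struct{}

func (fakeTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	body := []byte("Hello")
	return &http.Response{
		Status:        "200 OK",
		StatusCode:    200,
		Proto:         "HTTP/1.1",
		ProtoMajor:    1,
		ProtoMinor:    1,
		ContentLength: int64(len(body)),
		Header:        make(http.Header),
		Body:          ioutil.NopCloser(bytes.NewReader(body)),
		Request:       req,
	}, nil
}

func main() {
	// anything that uses the default client is now intercepted
	http.DefaultClient.Transport = fakeTransport{}

	resp, err := http.Get("http://localhost/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	data, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(data)) // prints: 200 Hello
}
</code>
</pre>
<div>The router in the Go example below is plugged into the client in exactly the same way.</div>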
<div><br /></div>
<div>So what does this look like in Go? The above Python example translates into <a href="https://github.com/TestInABox/gostackinabox/blob/master/examples/hello/helloworld_basic_test.go" target="_blank">this Go code</a>:</div>
<pre>
<code class="go">package hello_test
import (
//"errors"
"net/http"
//"io"
"testing"
"github.com/TestInABox/gostackinabox/examples/hello"
"github.com/TestInABox/gostackinabox/router"
)
func Test_HelloWorldBasicService(t *testing.T) {
t.Run(
"GET",
func(t *testing.T) {
// configure the HTTP Client
r := router.New()
http.DefaultClient.Transport = r
// create a Go-Stack-In-A-Box service
hwbService, hwbServiceErr := hello.NewHelloWorldBasicService()
if hwbServiceErr != nil {
t.Errorf("Failed to create hello World Service: %#v", hwbServiceErr)
}
// expected result
expectedBody := "basic hello world!"
expectedBodyLength := len(expectedBody)
expectedStatusCode := 200
// register it with the client
serviceUrl := "https://hello.world"
registerErr := r.RegisterService(serviceUrl, hwbService)
if registerErr != nil {
t.Errorf("Failed to register hello world service: %#v", registerErr)
}
// attempt the service call
resp, respErr := http.Get(serviceUrl)
if respErr != nil {
t.Errorf("Error making HTTP Call: %#v", respErr)
}
if resp != nil {
// validate the status code response
if resp.StatusCode != expectedStatusCode {
t.Errorf("Unexpected status code: %d != %d", resp.StatusCode, expectedStatusCode)
}
// validate the body length
if resp.ContentLength != int64(expectedBodyLength) {
t.Errorf("Unexpected body length: %d != %d", resp.ContentLength, expectedBodyLength)
}
// access the response body and validate it
bodyData := make([]byte, 2*expectedBodyLength)
readDataLength, readDataErr := resp.Body.Read(bodyData)
if readDataErr != nil {
t.Errorf("Unexpected error reading data: %#v", readDataErr)
}
t.Logf("Data Length: %d, Data: %#v", readDataLength, bodyData)
if readDataLength != expectedBodyLength {
t.Errorf("Unexpected data length read: %d != %d", readDataLength, expectedBodyLength)
}
// read gives back a byte array, to it must be converted to a string to convert
strBodyData := string(bodyData[:readDataLength])
if strBodyData != expectedBody {
t.Errorf("Unexpected body data received: \"%s\" != \"%s\"", strBodyData, expectedBody)
}
} else {
t.Errorf("Unexpected nil response")
}
},
)
}
</code>
</pre>
<div>The <a href="https://github.com/TestInABox/stackInABox/blob/master/stackinabox/services/service.py" target="_blank">Python</a> and <a href="https://github.com/TestInABox/gostackinabox/tree/master/examples/hello" target="_blank">Go</a> "Hello World" implementations are quite comparable as well.</div>
<div><br /></div>
<div>So now you can easily re-use API mocks using a standardized interface and interception method in both Python and Go. And even better - the Go version is simpler because it's using purely built-in functionality and additional 3rd party tooling is not needed.</div>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-21496457343987706362020-12-16T02:04:00.001-05:002020-12-16T02:04:46.822-05:00A Response to Programming Language Comparison...<div>A friend asked my opinion on an <a href="https://medium.com/better-programming/modern-languages-suck-ad21cbc8a57c" target="_blank">article</a> that dealt with comparing languages. After reading the article I decided it was best to put my response here in long form versus trying to fit it into a post on Facebook.</div><div><br /></div><div>My personal background is 15 years in C/C++, 6+ years in Python, and I've been operating mostly in Golang for the last 7+ months (new job). When I first got into computing I learned a ton of languages (DOS Batch, QBasic, Visual Basic, Pascal, C++, JavaScript, HTML/CSS, and the list goes on). I've worked with Java, Kotlin, C#, MS Managed C++, and others as well over the years. I've even dabbled in x86 Assembler.</div><div><br /></div><div>I've written applications from basic device monitoring over serial ports on Windows 2000 for SATCOM; to near-real-time railway integrity measurement systems; and for the last few years I've been working on Backend APIs for Cloud Services. Throughout all of that I've written GUI interfaces, command-line interfaces, libraries, protocols, and micro-services; and at times managed projects north of 500k SLOC on my own.</div><div><br /></div><div>All this is to say, I'm no newcomer - I've got some experience to work from across a wide range of the software industry and in various languages, frameworks, etc. Now, on to the article...</div><div><br /></div><div>In general I like the article. It is well reasoned; but still not without its own bias and subjective additions. But then, probably any discussion on this topic will be that way right now; my own commentary below included. That said, it's one of the better articles on the subject I've come across as most set out to convince someone to use a specific language.</div><div><br /></div><div><u>What Characteristics Really Matter?</u></div><div><br /></div><div>The article correctly points out that popularity and earning potential are not good measurements, and then goes on to try to establish various metrics the author finds useful:</div><div><ul style="text-align: left;"><li>Type System</li><li>Learning Effort</li><li>Nulls</li><li>Error Handling</li><li>Concurrency</li><li>Immutability</li><li>Ecosystem/Tooling</li><li>Speed</li><li>Age</li></ul></div><div><br /></div><div>All of these are generally good to academically compare a language, but none actually matter in answering the question because at the end of the day there is only one thing that matters to a Software Engineer: Can I get work done?</div><div><br /></div><div>If one cannot get work done, then no matter the metrics or properties of a language (or tool) it's useless. This is why many often move to language popularity as a metric - the theory being that because a language is popular (or not) one can derive how easily one can get work done in that language. 
However, that measurement also fails the question because whether or not any given individual can get work done using a specific tool depends on many things, most of which are highly personal.</div><div><br /></div><div>As an example: I've been working to try to build some Android applications for a few years with varying degrees of success. I haven't yet gotten anywhere near having something I would even consider giving to someone else to use. Throughout that time I've tried both languages I knew well (Python) and ones I needed to learn (Java, Kotlin, Go). Java is highly popular in the Android world; however, for me it's a useless language as it's not one I can get work done in. Sure, I can maintain something someone else started; but it's not one that works well for me to start new projects and get work done. Kotlin is a little better; but the tooling around it provides its own hurdles (especially with a community that is highly centered around using specific IDEs and doesn't look kindly on folks that don't). I'm quite familiar with Python too, but getting a dev environment set up properly has been quite tricky there. Interestingly, Go has been the most successful for me thus far; I'm still early on with my application experiments, but it's very promising.</div><div><br /></div><div>Okay...so we answered one question. Now let's look at the author's various chosen metrics.</div><div><br /></div><div><u>Type Systems</u></div><div><br /></div><div>The author makes some good points, and arrives at a good conclusion:</div><div><br /></div><div> We also have to keep in mind that people tend to put too much importance on type systems. There are things that matter far more than static typing, and the presence or lack of a type system shouldn’t be the only factor when choosing a language.</div><div><br /></div><div>I can wholly stand behind that conclusion. C/C++ are highly type oriented; while Python is nearly typeless. I'd argue that the recent addition of typing to Python (via Type Hinting) and JavaScript (via TypeScript) is actually detrimental to the languages.</div><div><br /></div><div>Really, what matters more here is the developer's ability to be disciplined in what they do and how they do it. A good developer won't need a type system, but will also find typing useful in appropriate problem spaces. So arguments can easily be made both ways.</div><div><br /></div><div><u>Learning Effort</u></div><div><br /></div><div>This is perhaps a very good metric as it means a fast path to "getting work done". If it takes too long to learn, then one won't be getting work done. On the other hand, being too simplistic (e.g QBasic, Visual Basic) also has its own problems.</div><div><br /></div><div><u>Nulls</u></div><div><br /></div><div>This is a discussion much like discussions about `Goto`. It's a misnomer, and not a good metric. Why?</div><div><br /></div><div>Nulls exist for a good reason - to denote that something doesn't exist. In C this was typically represented by the NULL or zero value (literally the integral value of zero) as that is what hardware recognized as an invalid value for various things. For instance, in Assembler jumping to address 0x0 is valid, but that's typically a reserved address for low level operating system functionality that applications shouldn't be touching. This is enforced through memory protection functionality that is built into the hardware - the computer processor and memory management unit (MMU). 
Later languages added explicit language mnemonics for it - nil in Go, null in Java, nullptr in C++11 - which formalizes this and makes it easier to check for since the application is no longer dependent on a value that may be valid (0) when it's really not. However, the state of the application when one comes up is a perfectly valid one, and one that must be handled.</div><div><br /></div><div>As the author points out, Type Systems fail with Nulls; but it's not due to Nulls breaking the type system as the author indicates. Rather it's because it's a valid code path the application may follow; a valid state the application may for whatever reason get into. The Type System detects this. Static Analysis tools find this too; many can often warn about the potential as well if the developer failed to put a check in.</div><div><br /></div><div>NOTE: Many bad memory references are due to the application getting into a state that the developer did not expect; often due to Null references. The simple discipline of validating expectations in code resolves this; and doesn't actually add enough overhead to be worth skipping. In fact, this is the one place I'd argue that the developer has a responsibility to add it - in any language - and then allow the compiler/interpreter to decide how to apply optimization; even optimizing the checks out if it can prove they aren't necessary.</div><div><br /></div><div><u>Error Handling</u></div><div><br /></div><div>How one handles errors speaks a lot to how mature a developer they are, and how disciplined they are regardless of error handling system. One of the biggest mistakes I've seen is that developers don't sufficiently check and handle errors. Being disciplined and handling errors as closely as possible to where they occur gives the application the best chance of recovering from the error. Failure to handle an error means the application will react badly and lead to a bad user experience.</div><div><br /></div><div>The problem with Exceptions is that developers tend to only handle those they know about and care about and then leave anything else to be handled by something higher in the call stack. However, that something higher likely has no possible way to recover and will thus end up terminating the thread, sub-process, or even the application as a whole as a result. Exceptions are nice because they don't muck with the return value system; but the general use and lack of handling is a bigger point against them.</div><div><br /></div><div>Without exceptions one must instead denote an error some other way. In C/C++ this is often done via an error code. libc historically had the global errno value; but that's extremely limited. Instead developers needed to build it into the APIs in some form.</div><div><br /></div><div>One nice form is what Go did with allowing multiple results, one of which is an `error` type.</div><div><br /></div>
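<div>As a small illustration (plain Go, not from any particular project - the function and values here are made up), the error comes back alongside the result and is handled right where the call is made:</div>
<div><br /></div>
<pre>
<code class="go">package main

import (
	"fmt"
	"os"
	"strconv"
)

// parsePort returns either a usable value or an error - never both.
func parsePort(s string) (int, error) {
	port, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %v", s, err)
	}
	if port < 1 || port > 65535 {
		return 0, fmt.Errorf("port %d is out of range", port)
	}
	return port, nil
}

func main() {
	port, err := parsePort("8o80") // deliberately bad input
	if err != nil {
		// handle the error right where it occurred
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("listening on port", port)
}
</code>
</pre>
<div><br /></div>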
<div>That said, the biggest thing here is to be consistent. If you're going to use Exceptions handle them, and use them everywhere. If you're not, don't allow Exceptions at all. But in either case handle the errors - all of them - no matter how unlikely you think they may be as it will matter to your user.</div><div><br /></div><div><u>Concurrency</u></div><div><br /></div><div>Providing functionality for working with Concurrency can certainly help. Too much and it can hinder too. The bigger issue with concurrency is not whether a language provides native support for Concurrency but how developers handle working with things concurrently, and asynchronously. That is actually one of the big hurdles (next to pointers) that developers need to learn to deal with on their path to becoming a good software engineer.</div><div><br /></div><div>NOTE: Java provided capability for Concurrency for a long time prior to the current state of computers with multiple cores; however, that didn't stop concurrency from breaking the language. The fall of Java started in part with Intel's move from processors that simply kept getting faster to multi-core processors - the old Java mantra was to always throw more hardware at a performance problem, and the initial multi-core processors had cores that were significantly slower than their single-core counterparts even though their overall performance was at or better than those single-core processors. Later issues with Security and Oracle hit at about the same time; all of them together have led to a massive change from Java being the language to use to being one that, like Cobol, is mainly used in relation to existing products.</div><div><br /></div><div><u>Immutability</u></div><div><br /></div><div>Immutability has its place; but like Type Systems it's also overrated. There are better ways to solve the issues it's trying to solve, and most of that is around the discipline of the developer.</div><div><br /></div><div>React (aka ReactJS) relies on Immutability as it's easier in JavaScript than writing a better framework. In fact, JavaScript's biggest issue is the frameworks available to it; but that's another discussion altogether.</div><div><br /></div><div>Going back to the directive of "getting things done", Immutability often gets in the way. More on that kind of thing later.</div><div><br /></div><div><u>Ecosystem/Tooling</u></div><div><br /></div><div>This is important as it drives the whole "getting things done" issue.</div><div><br /></div><div>Let's use the example of Kotlin.</div><div><br /></div><div>If one is able to use all the tools that the Kotlin core devs like to use, then the tooling for Kotlin is great. However, once you leave that space it quickly falls apart. Kotlin is heavily designed and built around using IntelliJ (which is made by the same folks that created Kotlin); as a result instead of explaining how to find libraries they say "use this shortcut". Likewise, there's little to no instruction for building things via a Makefile as it's expected to just use the Build button in the IDE. Want to use Makefiles or other build systems? Want to use Vim? Emacs? Notepad++? You're on your own.</div><div><br /></div><div><u>Speed</u></div><div><br /></div><div>The author calls out a few things which do matter:</div><div><ul style="text-align: left;"><li>build time</li><li>run-time</li><li>load-time</li></ul></div><div><br /></div><div>However, these are often more directly impacted by how a developer designs, architects, and lays out the application.</div><div><br /></div><div>C and C++ are generally thought to have bad build times. However, often this is because of tossing too much at the compiler. On Windows it's common to simply use `windows.h` to get all the Windows APIs instead of the 3 or 4 actual Win32 headers one needs. As a result, folks started using `stdafx.h` to do pre-compiled headers which end up with their own set of problems. 
Many C/C++ developers on all platforms have ended up using pre-compiled headers as they're sold as a solution to build times. However, a better solution is to simply limit what headers you need to start with; but this requires more discipline on the part of the developer - doing so yields far more dividends for the integrity of the application, improvements in build times, etc. than pre-compiled headers ever could.</div><div><br /></div><div>Similarly, load-time is driven by how a program is architected. That's not to say that it won't be affected by language infrastructure (f.e the Java Virtual Machine), but that there's often a lot more that is happening as a result of the application design and architecture instead. One symptom of this is when an application (f.e Microsoft Office, OpenOffice, LibreOffice, Chrome) employs a background process to pre-load the application - it's always running; while that speeds up load time of the application it also slows down the computer overall as there are simply more resources being used all the time.</div><div><br /></div><div><u>Age</u></div><div><br /></div><div>This is another misnomer, and speaks more to what younger developers are taught to rely on than anything else.</div><div><br /></div><div>Younger languages will have features that developers are taught are better - f.e garbage collection - things that protect developers from themselves. However, that doesn't necessarily make them better. Often these "features" become problematic.</div><div><br /></div><div>For example, want to use Java or Golang in real-time (or even near real-time) or high performance environments? Don't rely on the garbage collector; instead allocate everything up front at startup and *never* allocate anything after that. This is true also if you're using C or C++. Why? Garbage collectors will introduce random performance penalties that will have unexpected consequences at unpredictable times. Some GC tooling will give some control over that; but language GC functionality will always be for the most general case.</div><div><br /></div><div><u>Language Space: Functional vs Procedural vs Object-Oriented</u></div><div><br /></div><div>Mid-way through the language comparison the author takes a detour into language spaces, with an obvious bias towards Functional Programming (Haskell, etc). Again, this is a misnomer and not an easy thing to compare against.</div><div><br /></div><div>Why? Some languages are strictly one or another. For example, Haskell is strictly Functional. However, others (C, C++) cross several of them depending on how one wants to structure one's program.</div><div><br /></div><div><u>Language Comparison</u></div><div><br /></div><div>I'm not going to go into the various comparisons based on the criteria the author specified simply because the analysis is flawed and incomplete. There is also an obvious bias towards functional languages - only functional languages get above 2 stars.</div><div><br /></div><div>Certainly learn a variety of languages. Learn what works for different problem spaces and use it appropriately. 
But most importantly, use what helps you get work done because if you're not getting work done your employer won't care what language you're using; they will, however, care about the time you're taking to do stuff - which means time to market, which means sales, and which means customers.</div><script src="https://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-25153756118194334952017-02-01T17:21:00.000-05:002017-02-03T09:50:41.511-05:00Why I will never Squash or Rebase in Git...I'm a versioning purist. I'll admit that. I love to be able to access version history in the VCS systems I work with. Maybe it's just my Subversion (SVN) background, but I like being able to read through the history easily and find the logic for what changed and why in the log messages associated with commits. (And if the log messages don't contain that information then they're not *good* log messages.) So I am heavily against Squashing and Rebasing in Git.<br />
<br />
GitHub recently introduced the ability to do Squashed Commits on merge, and some of my team members decided to give it a try. However, it was immediately apparent that Squashing is evil. Why? Because it really hurts being able to track stuff and keep a clean working copy locally.<br />
<br />
My general local development works something like this (assuming the working copy has already been cloned and upstream setup):
<br />
<blockquote>
$ git checkout master<br />
$ git fetch upstream<br />
$ git merge upstream/master<br />
$ git checkout -b my_working_branch<br />
...<br />
do stuff<br />
...<br />
$ git commit<br />
$ git push origin my_working_branch<br />
...<br />
get it merged remotely<br />
...<br />
$ git checkout master<br />
$ git fetch upstream<br />
$ git merge upstream/master<br />
$ git branch --merged<br />
...<br />
look for my_working_branch<br />
...<br />
$ git branch -d my_working_branch
</blockquote>
Squashing creates several issues:<br />
<br />
<i>1. Detection of merges break</i><br />
<br />
For example, if you squash a branch on merge, the last couple of steps above won't work. Git can't tell that the branch was merged because it can't track the hashes of the branch.
This means that you now have to do:<br />
<br />
<blockquote>
$ git branch -D my_working_branch
</blockquote>
This means it now becomes extremely easy to remove the <b>wrong</b> branch.<br />
<br />
This is also true of specific commits if you squash before pushing and are cherry picking between branches, etc - so it's not isolated to just branch merges.<br />
<br />
<i>2. Removes valuable history and insight</i><br />
<br />
History contains details. Logs contain details. This is very important information when trying to determine why someone did something the way they did - e.g when trying to find and fix a bug.<br />
<br />
Git has an awesome feature called <b>git bisect</b> that allows you to find the exact commit a bug was introduced in. Squashing means you can only find the total group of commits that introduced the bug, not the commit itself. You now have a take-it-or-leave-it situation for the entire group. You also lose any contextual information regarding the specific commit and why it may have happened.<br />
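For the curious, a bisect session looks something like this (illustrative only - the tag name here is made up):<br />
<blockquote>
$ git bisect start<br />
$ git bisect bad<br />
$ git bisect good v1.2.0<br />
...<br />
build and test the checkout git hands you, then mark it<br />
...<br />
$ git bisect good   (or)   $ git bisect bad<br />
...<br />
repeat until git prints the first bad commit<br />
...<br />
$ git bisect reset
</blockquote>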
<br />
<i>3. It foobars anyone tracking what you're doing</i><br />
<br />
When using a VCS system, code is meant for sharing, and once you share it others (f.e upstream maintainers, co-maintainers, etc) may check out your branches to monitor progress if they are interested in what you are doing. You do not necessarily know who these people are either. However, squashing and rewriting history will screw up their ability to cleanly track your work.<br />
<br />
You also make it problematic for yourself, especially if working on the same codebase on multiple systems (f.e laptop, desktop, server) since you will have to do a force push (<i>git push origin my_working_branch --force</i>) if you squash after pushing it. That means you'll have the same issues as others if you need to keep other places in sync, not to mention you may <b>lose</b> your own work in that case too if, for example, you push up one change set from your laptop and another, without merging, from your desktop. What got pushed from the first system (e.g the laptop) via a forced push will be lost when the second system (e.g the desktop) pushes up the changes.<br />
<br />
<br />
Git Rebase runs into many of these same issues, even exacerbating some of them (#3). Rebasing also runs into the following issues:<br />
<br />
<i>1. Your own branch history makes less sense.</i><br />
<br />
That is to say, you lose the context of the changes in your branch by moving about the commits. The reason why you did something in a commit has very much to do with what the code looked like prior to that commit. Rearranging the history so newer commits appear after merges removes that context.<br />
<br />
<b>2. Sharing branches becomes that much harder.</b><br />
<br />
This is a really, really big emphasis on #3 above regarding squashing breaking sharing branches with others and even yourself. Only it happens on the merge level instead of the push level.<br />
<br />
<br />
Now this is not to say that there are no uses for these features at times. There are, but they should be used with extreme caution and extreme rarity. The smaller the project, the less likely they should be used.<br />
<br />
For example, I can certainly understand why the Linux Kernel maintainers may use these features - with dozens of people sharing code and consolidating it down as it moves upstream. However, that is a project that has numerous layers where upper layers don't need to care as much about the details of the lowest layers, so squashing and rebasing can happen at controlled points between the layers and everyone - tracking at their layer - will be able to track what's going on more easily. Bugs are tracked and fixed at the various layers. Most projects are neither this size (millions of lines of code, contributed by tens of thousands of people) nor this complexity, nor do they have such a large hierarchy of contributors (each subsystem and release has a person dedicated to its maintenance). <br />
<br />
In the end, you really need to care more about history than most people do - especially in small projects, and even more so in projects that may have high turnover of their contributors, and even more so when turnover leaves little (if any) time for transfer of knowledge.<br />
<br />
History is always important, and you may never know how important it is, because the person who needs to know the history and details may be several times removed from you - long after you have moved on to better things - maintaining the code and trying to figure out why you did something. Always write code and use history for <b>that</b> person, not yourself. They likely won't be as smart as you either.<br />
<br />
The above is, for now, my current list of well-known reasons why not to use Rebase and Squashing. I'll add more as they come up.<br />
<br />
<i>UPDATE:</i> If you ever have to force push, then you did something wrong.TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-82206316960384309342016-04-14T17:30:00.002-04:002016-04-14T17:51:31.811-04:00Docker and Network SecurityDocker is great. Containers are awesome. But we still have to beware of security with them.<br />
<br />
I have been getting more and more into Docker and Linux Containers of late. They make the old schroot functionality extremely easy to use (though the same caveats apply), but also make distributing that functionality extremely easy, and building it very reproducible.<br />
<br />
Docker Compose takes it a step further, enabling multiple containers to be built and interlinked via the Docker Network. Just don't forget about your firewall.<br />
<br />
On my dev boxes, I have firewalls that by default reject all traffic and then allow SSH so I can work on them. I've been using Docker containers on one of them lately, and noticed that some of the containers had requests from outside sources. That shouldn't have happened - I didn't enable the firewall to allow that. So I checked IPTables, and sure enough there it was:<br />
<br />
<pre>
root@dev:~/project# iptables --list DOCKER
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:6379
</pre>
<br />
<br />The problem is the source column. Since it is set to "anywhere" any traffic coming from any IP or Interface can access the container. That's not what I wanted.<br />
<br />
After asking around, I found there's an "--iptables=false" flag that can be provided to the Docker service. Using it prevents the IPTables rule from being entered at all. But then the container can't be accessed; it's isolated unless I write the rules myself - something I also don't want to do since it's more likely that I would get them wrong than if Docker did it.<br />
<br />
From a security perspective, the above should be the following:<br />
<br />
<pre>
root@dev:~/project# iptables --list DOCKER
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  127.0.0.0/24         172.17.0.2           tcp dpt:6379
ACCEPT     tcp  --  172.17.0.0/16        172.17.0.2           tcp dpt:6379
</pre>
<br />
<br />
This limits all traffic to the containers to (a) anything from localhost, and (b) anything from within the Docker Network. Alternatively, it could be resolved by using the Docker bridge network devices (e.g docker0) and the loopback interface (lo) so that anything bound to them would work. Either way it would be a dramatic security improvement over the current situation.<br />
<br />
So here's an example.<br />
<br />
You have an application that requires a database and provides a RESTful API. You want to use a tool like nginx to terminate SSL connections. In the normal case only the SSL connection port would be exposed to the public for use - both the ports for the database and for the RESTful API are to be hidden inside the container network, but they have to be exposed to each other so that all the containers can talk to each other. You dockerize all these. Then you check the firewall and see that all three are exposed to the public network.<br />
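<br />
For comparison, here's roughly what that stack looks like as a docker-compose.yml when you only publish what actually needs publishing (the image names and ports here are made up); only nginx is published to the world, the RESTful API is published only on loopback for local testing, and the database isn't published at all:<br />
<br />
<pre>
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "443:443"               # intentionally published publicly
  api:
    image: example/rest-api     # hypothetical image name
    ports:
      - "127.0.0.1:8080:8080"   # published only on loopback
  db:
    image: postgres
    # no "ports:" entry - reachable only over the Docker network
</pre>
<br />
The containers can still reach each other by service name over the Compose-created network, and only the ports you explicitly publish should show up in the firewall - but the publicly published one still ends up with the "anywhere" rule shown above, which is the part that really needs fixing in Docker itself.
<br /><br />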
<br />
This issue is several fold:<br />
<br />
1. It's an issue for devs because they may be doing this on systems on random networks (if using a laptop) or publicly available systems (if using a cloud server). Nefarious actors can then target the devs and possibly learn about stuff that will eventually be in production, and know things you don't want them to know.<br />
<br />
2. It's an issue for deployments if you're not careful. The only ways to resolve it are (a) disable firewall modifications by Docker and manage it all yourself, or (b) put the entire system into a private network. This also assumes you actually have control to do that instead of using a service that just uses some specifications (e.g docker-compose.yml) to build things out and host the site for you.<br />
<br />
I've filed a Bug/Feature-request against <a href="https://github.com/docker/docker/issues/22054">Docker on the issue</a>. Hopefully we can get some attention and help to get this fixed and enable everyone to use Docker more securely - preferably by default, but even a non-default option would be an improvement.
<br /><br />
Just to be clear - does this mean you shouldn't use containers or Docker? Absolutely <i>NOT</i>. Just be careful when doing so, and take precautions when using it for development and especially for production deployments.
<br /><br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-58613442822760622082016-02-19T16:28:00.002-05:002016-02-19T16:28:40.680-05:00Releasing Python Packages with PBR...So it's been a while since I've had to release one of my Python-based projects and publish it to the PyPi distribution network. Publishing packages is generally really easy:<br />
<br />
<br />
$ python setup.py sdist build<br />
...<br />
Writing myproj-x.y.z.tar.gz<br />
...<br />
$ twine upload -r pypi dist/myproj-x.y.z.tar.gz<br />
<br />
However, I also use <a href="https://github.com/openstack-dev/pbr" target="_blank">OpenStack's PBR</a> (Python Build Reasonableness) as it makes the setup.py and related functionality very easy. Unfortunately, it also complicates the above...<br />
<br />
$ python setup.py sdist build<br />
...<br />
Writing myproj-x.y.z-devNNN.tar.gz<br />
...<br />
$<br />
<br />
What to do?<br />
<br />
If you look closely at the documentation for PBR you can find some notes for packagers - http://docs.openstack.org/developer/pbr/packagers.html. Among these notes is a statement about the <a href="http://docs.openstack.org/developer/pbr/packagers.html#versioning" target="_blank">environment variable PBR_VERSION</a> - which is easy to overlook given the non-obvious link to the package you're trying to release.<br />
<br />
In the end, you just have to use PBR_VERSION to get it right and bypass any version calculations PBR itself does like so:<br />
<br />
$ export PBR_VERSION=x.y.z<br />
$ python setup.py sdist build<br />
$ twine upload -r pypi dist/myproject-x.y.z.tar.gz<br />
<br />
And <span class="st">voilà it's the correct package for the version and now it's up on PyPi.</span><br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-85347150484317028052015-07-29T00:51:00.001-04:002015-07-29T00:51:40.880-04:00git vs svn - pulling in external repositoriesI have had the pleasure of using both Subversion (svn) and git. I came out of extensive use of Subversion, having administered repositories both personally and professionally for 8+ years. During that time I participated on the Subversion Users mailing lists, both seeking and providing advice. During which time I had upgraded many repositories from one version to another. Needless to say, I would say I am an expert in Subversion, at least versions 1.2 through 1.7.<br />
<br />
In 2013 I started a new position, and the teams I have worked with since have used git. I have been using git extensively since then, and recently started implementing submodules in some newer git repositories. This post is a reflection of my comparison between Subversion's svn:externals system (as it was 2 years ago at least, which I doubt has changed much since) and git's submodule system.<br />
<br />
The end-result of the two systems is the same - pulling into one repository one or more other repositories so that it may use and rely on them. This is a favored model of mine when it comes to having public and private interfaces. I create a repository that contains the public interfaces, and all the various repositories implementing those interfaces pull the public interfaces repository in as a dependency. Interfaces internal to the library are kept in a separate section (f.e the <i>include</i> directory in the repository). The beauty of this model is that it allows you to create a consistent set of APIs that can be released; the implementing libraries can change their internals as long as they maintain the public interface. Further, it allows for things like file formats to be abstracted easily since the detailed information is hidden inside a project, not in the public interface - which can be as abstract as needed.<br />
<br />
That's the use-case, but what's the difference between these two very good version control systems in providing the requisite functionality?<br />
<br />
First, Subversion.<br />
<br />
Subversion provides a series of textual content properties in a repository that are, like everything else, versioned as part of the repository. Change a property, and it creates a new revision in the repository. To support the functionality discussed above, Subversion provides a property called "svn:externals". The "svn:externals" property consists of multiple lines; each line describing a repository and where to store it.<br />
<br />
Prior to version 1.5, the "svn:externals" property used one format that was specialized to the use case. In 1.5 and later, the format was revised to match that of the command-line "svn" interface. Furthermore, this change provided additional versioning capabilities. The original format had to specify a complete URL, just like one would use to access a repository; the new format could continue to do so, or one could use a relative URL format - meaning it is in the same repository, just under a different portion of the tree.<br />
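<br />
As an illustration (the URL and directory names here are made up), a pre-1.5 property entry listed the directory first and then the full URL:<br />
<br />
common http://svn.example.com/repos/common/trunk<br />
<br />
while the 1.5-and-later format reverses the order and also allows repository-relative URLs:<br />
<br />
^/common/trunk common<br />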
<br />
When one would check out a repository with additional repositories in the "svn:externals" property, Subversion would also automatically pull all the repositories listed in the "svn:externals" property and place them in the specified places. Once the checkout was done, you were ready to use the contents of the repository - build the software, etc. No additional steps necessary.<br />
<br />
Now, git.<br />
<br />
git provides the functionality through the "submodule" sub-command. The "submodule" sub-command has a series of its own sub-commands which perform most of the various tasks on the external sources. git itself controls the entire internal format, though the data is stored in a text file called <i><b>.</b>gitmodules</i>. git tracks the external source just like any other object - through its commit hash.<br />
<br />
However, unlike Subversion, git does not automatically pull the "submodule" repositories when the main repository itself is cloned - this requires an extra step.<br />
<br />
The solution? Projects using git create scripts that perform several of the git tasks automatically so that they don't have to remember them every time a repository is cloned.<br />
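<br />
(Such a helper usually amounts to just a couple of lines - something like this hypothetical setup.sh:)<br />
<br />
#!/bin/sh<br />
git submodule init<br />
git submodule update<br />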
<br />
The difference for git is a fall-out of its design: since a clone of a repository records all the information and looks nearly identical to a working copy, a hosting server does not want to check out every submodule, and therefore it cannot be automatic.<br />
<br />
In many respects, I feel the functionality in Subversion to be superior in this area as it's a pain to figure out what to do and how to interact with the git submodules. Everyone talks about how the commands work, but no one really goes through the complete steps of using it.<br />
<br />
So, to clear the air a little on git submodules:<br />
<br />
<br />
1. To add a submodule:<br />
<br />
myrepo $ git submodule add git://repository/url.git foldername<br />
<br />
2. After checking out a repository containing submodules:<br />
<br />
myrepo $ git submodule init<br />
myrepo $ git submodule update<br />
<br />
3. To update a submodule:<br />
<br />
- the submodule can be managed just like any other git repository<br />
- when the submodule is in a state that is desired, just add it like any other git resources and commit it with the rest of the changes.<br />
<br />
git does help in that it provides a way to run an arbitrary command in every submodule - the "git submodule foreach" command (for example, <i>git submodule foreach git pull</i>) - and the init and update steps above can also be combined into a single command:<br />
<br />
myrepo $ git submodule update --init<br />
<br />
Well, hope this helps.<br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-41420782291603085982014-10-21T00:11:00.001-04:002014-10-21T00:11:14.986-04:00Home Owners Associations - America's Hypocracy<p dir="ltr">I am an American. I value my freedom. And yet so many American's give it up to non-governmental organizations ever day when they purchase a home - a house, a town house - that is part of a Home Owners Association (HOA).  HOA's typically have fees that range any where from a few 10's of dollars to hundreds or thousands of dollars a year.  The cost is one thing; but the freedom that is given up is in the rules.</p>
<p dir="ltr">The rules are set by your neighbors. By those who were part of the HOA, it's politics, etc prior to you buying the home. You may or may not be able to change those rules; and they're not typically subject to the courts. That is, unless you want to refuse to pay the fines they issue against you, accrue some bad credit, have a lein placed against the property (so you can't sell it), and then try to fight it in the courts if you can even get it there. All-in-all you're at the mercy of the HOA.</p>
<p dir="ltr">So why are there HOA's?</p>
<p dir="ltr">Well, some will argue that you have to have them to protect your property value. Huh?  Oh, they want to make sure the neighborhood continues to look nice. So they want to exert control over their neighbors to try to make the whole neighborhood look like what they think it should be.</p>
<p dir="ltr">So what's the problem if it's all about property value?</p>
<p dir="ltr">Well, it's not. It's about control, and control over other people whether they admit it or not.</p>
<p dir="ltr">How's that?</p>
<p dir="ltr">Well, suppose you own a boat. You are legally allowed to keep it on your property. But your neighbor thinks it is unsightly. They won't want to see a boat. So they get the HOA to pass a rule saying that boats have to be in the garage, behind a fence, behind the house, etc. Just so they don't have to see it. They've now impeded your rights in order to satisfy their power thirst.</p>
<p dir="ltr">But it doesn't stop there.</p>
<p dir="ltr">Some places go so far as to control how many plants you can have in your front yard. Or how many cars you can have in the driveway.</p>
<p dir="ltr">One HOA I ran across had some vandalism of the pool that was tracked to some underage kids. They then passed an HOA rule that any minor (e.g under 18) out on common property of the HOA (e.g walking on the sidewalk) after 10PM would be arrested for trespassing. Absolutely the wrong response, but one allowed under HOA rules, and enforced by contract law.</p>
<p dir="ltr">Now don't get me wrong - HOAs can have a purpose - taking care of common property that doesn't belong to any single home owner. But that should be all that HOAs are allowed to do. They should not be allowed to control what goes on on your property. That should only fall under the laws governed by the voters.</p>
<p dir="ltr">But aren't HOA's governed by "voters"?</p>
<p dir="ltr">Not like your county, municipal, or state lawmakers are. Nor are they governed by any politics beyond what little happens outside your small community. They're not answerable to the normal legislative processes, and chances are most of the community knows even less about what is going on in the HOA than they do about the municipality or county politics (which sadly is little enough as it is). Moreover, they're typically private meetings that are not open to journalists, only other HOA members, and therefore not open to the normal public scrutiny that every other legislative body has.</p>
<p dir="ltr">Moreover, you can't get out of them unless you sell your home.</p>
<p dir="ltr">Moreover since many towns don't want to take over the burden of extending their population, they won't allow contractors to have the newly built communities added to them. So then the contractor sets up an HOA; which can't be dissolved unless either a new town is set up or an existing town agrees to absorb the community (which, again they are reluctant to do).</p>
<p dir="ltr">All-in-all it's getting harder and harder to buy a home without an HOA unless you can buy a chunk of land and build it yourself; and even then you have to make sure that it's not part of a community being built out by a contractor that is sub-parceling the land you're buying. Even then, not every State has laws that you, as the buyer, has to be informed about the HOA prior to sale - which has landed many in the position of having an HOA rep knock on their door demanding dues and fines long after they purchased the home.</p>
<p dir="ltr">Still think they're a good thing?</p>
<p dir="ltr">Still think they're out to save your property values?</p>
<p dir="ltr">Sorry, but in my opinion an HOA only DEVALUES your home because it restricts your rights.</p>
<p dir="ltr">HOAs are NOT American. They're an Anti-American entity; existing only to steal your rights so that one of your neighbors can illegal exert control over you.</p>
<p dir="ltr">Time to take back America.<br>
Time to dissolve HOAs.<br>
</p>
TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-81817727860556398622013-04-30T11:25:00.000-04:002013-04-30T11:25:18.780-04:00VMware Workstation 8 and Linux Kernel 3.8...So I recently upgraded to Kubuntu 13.04, which also means upgrading to Linux Kernel 3.8. However, as with most Kernel upgrades, my VMware install fails to upgrade. Most of the information out there is for VMware Workstation 9 (W9), but I'm running Workstation 8 (W8). Fortunately the fix for W9 is just as valid, but the line numbers are a little different.<br />
<br />
Here's what you need to do:<br />
<br />
1. Linux changed where the "version.h" header file is for the source. The fix is easy - a simple symlink:<br />
<br />
# ln -s /usr/src/linux-headers-`uname -r`/include/generated/uapi/linux/version.h /usr/src/linux-headers-`uname -r`/include/linux/version.h<br />
<br />
Now that is specifically for Debian-derived distros - your distro might put the headers somewhere else. And of course, you might be trying to support a kernel other than your running kernel - so adjust it as necessary.<br />
<br />
This will allow the build tool for VMware's modules to actually run. <br />
<br />2. Workstation's VMCI module fails to build.<br />
<br />
The Workstation 9 patch is available here: http://mafio.host56.com/2013/03/linux-kernel-3-8-vmware-failed-to-build-vmci/<br />
<br />
For Workstation 8, you can go here: http://communities.vmware.com/message/2234875#2234875. It's also below:<br />
<pre>
--- vmci-only/linux/driver.c	2013-03-01 02:46:05.000000000 -0500
+++ vmci-only.fixed/linux/driver.c	2013-04-30 11:05:25.923550628 -0400
@@ -124,7 +124,7 @@
   .name = "vmci",
   .id_table = vmci_ids,
   .probe = vmci_probe_device,
-  .remove = __devexit_p(vmci_remove_device),
+  .remove = vmci_remove_device,
 };
 
 #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
@@ -1741,7 +1741,7 @@
  *-----------------------------------------------------------------------------
  */
 
-static int __devinit
+static int
 vmci_probe_device(struct pci_dev *pdev,          // IN: vmci PCI device
                   const struct pci_device_id *id) // IN: matching device ID
 {
@@ -1969,7 +1969,7 @@
  *-----------------------------------------------------------------------------
  */
 
-static void __devexit
+static void
 vmci_remove_device(struct pci_dev* pdev)
 {
   struct vmci_device *dev = pci_get_drvdata(pdev);
</pre>
Enjoy! <br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com1tag:blogger.com,1999:blog-139540257852707668.post-59231676879655975242013-04-12T18:56:00.001-04:002013-04-12T18:57:18.568-04:00The Post-PC Era<p dir="ltr">Several years ago I started talking about the how the Motorola Atrix and its laptop-dock would change the world as more manufacturers picked up on the concept and integrated it with Android and other devices. Sadly, The laptop-dock for Motorola's many phones that supported it was far too expensive - nearly $500 USD, so it just didnt make sense for people to buy. So why am I writing about this now?</p>
<p dir="ltr">Well, now we have tablets; bigger than the phones I wrote about, but just as functional, if not more so. In fact, they can cut down the price of that laptop-dock by removing the screen - as indeed ASUS has done with its dock for the ASUS Transformer line - or removing the requirement to dock at all, as many have done by simply adding a BlueTooth Mouse and Keyboard, e.g. LogicTech's BlueTooth Mouse for Android, AKA the V470. So that day I wrote about years ago is now coming to pass - I am now writing this from my ASUS Transformer Infinity using its dock-keyboard.</p>
<p dir="ltr">And, as I said then, Microsoft is not doing well in this kind of mobile world. Win8/WinRT is quite the spectacular failure. While historically Microsoft tried to force everything to the Desktop, they have at least tried to do mobile. However, Win8 is a hybrid between the two worlds - a hybrid for a world where there is no hybrid. The two worlds of computing really are vastly different. Each needs to be taken on on its own terms, exploiting its own nature. In the end, it means that Microsoft's strong hold on the end-user computing market is at its end. And as a result of Microsoft's own nature of everything must be Microsoft, it's not a world in which they will survive.</p>
<p dir="ltr">So we all owe a big thanks to Google and Apple for making it happen.</p>
TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-46181221645857515462012-12-28T00:25:00.001-05:002012-12-28T00:25:26.180-05:00Gitorious...Over a year ago I started working on a new version of the Qt Service component. I originally started the work on gitorious.org and setup a project and three git repositories there.<br />
<br />
Well, I am writing this mostly to allay any confusion as to why I removed those repositories, and the answer is rather simple...<br />
<br />
I had not been doing any work on it, and I started doing more recent work using a Qt Playground project. The advantage is that I am better able to accept more help through the Qt Playground project since the CLA is required to work there (AFAIK) - or rather, it is at least easier to verify such things for the Qt community. It also enables the work to be done in the Qt Project's chosen fashion with all the same tools available (though not all necessarily enabled).<br />
<br />
I removed the projects from gitorious namely to keep anyone from getting confused as to where the work is actually taking place. The gitorious projects were: (i) a clone of Qt5 over a year ago that I never did get to compile, and (ii) essentially an import of the Qt Service component with some early thoughts.<br />
<br />
The work at the Qt Playground project is presently in Code Review for the initial push. It's not yet anywhere near complete but the general architecture should be there.<br />
<br />
So you can find the Qt Playground project here:<br />
<br />
https://codereview.qt-project.org/#q,status:open+project:playground/daemon,n,z<br />
<br />
You'll be able to get the source here:<br />
<br />
ssh://codereview.qt-project.org:29418/playground/daemon.git<br />
<br />
Happy Coding!<br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-3132246211974464402012-12-18T18:00:00.000-05:002012-12-18T18:00:04.959-05:00Rsync is your friendI needed to copy a Linux system recently. In fact, I ended up copying it multiple times as I reformatted the drives a few times. Rsync did the job perfectly:<br />
<br />
# rsync -a /mnt/tmp/mainbackup /mnt/tmp/system<br />
<br />
And in the end, my only issue was getting Grub to boot the drive due to silly BIOS limitations.<br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-88168250229920486252012-08-01T14:11:00.000-04:002012-08-01T14:11:52.017-04:00Pitfalls of C++/CLII'm a C/C++ guy. I've written a lot of software for Windows using VC++ - from VC++ 6 to VC++ 2010 - but always used native code - e.g. no .NET stuff.<br />
<br />
I recently inherited a project at work that had a mix of C# and C++/CLI. The C# stuff is actually pretty minimal - just enough to manage a Windows Service. The C++/CLI makes up the majority of the program, and is also tied with some Native C++ to interface to a driver DLL for some special hardware.<br />
<br />
Long story short, I'm trying to determine the cause of some buffering issues between this Windows application and a Linux-based application. The Linux-based application works with other software that does the same thing just fine; but there's a bug somewhere I'm trying to track down. In the midst of analyzing the Windows application I find some code that essentially does the following:<br />
<br />
<br />
<pre>struct myCppStructure
{
    unsigned int field1;
    unsigned int field2;
    unsigned int dataArray[512];
};
...
struct myCppStructure* data;
...
IntPtr dataPtr(data);
// myNetworkSocket is a NetworkStream cast as a System::IO::Stream^
System::IO::BinaryWriter^ myBinWriter = gcnew BinaryWriter(myNetworkSocket);
__int64 length = sizeof(struct myCppStructure) / sizeof(__int64);
unsigned __int64* ptr = static_cast<unsigned __int64*>(dataPtr.ToPointer());
// send the structure 64 bits at a time
for (unsigned int i = 0; i < length; i++)
{
    myBinWriter->Write(*ptr++);
}
// then calculate the remainder of the structure size and send that
</pre>
<br />
What bothers me is that in nearly all other toolkits you don't need the FOR loop to write the data. You could do something like:<br />
<br />
<pre>myBinWriter->Write(data, sizeof(struct myCppStructure));
</pre>
<br />
I've been trying to find an equivalent in C++/CLI, but it seems everything has to go through some other object to do it - usually resulting in a copy of some sort, which is not acceptable where this code is running.<br />
<br />
So it seems that something extremely basic is just completely and utterly lacking in C++/CLI. Simple things like this make it a useless language.<br />
<br />
If anyone knows the solution, then please link it in the comments. <br />
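<br />
In the meantime, here is a rough, untested sketch of one idea I may explore: wrapping the native structure in a System::IO::UnmanagedMemoryStream and letting the framework push it to the network stream in bulk. The names below (writeStructure, myCppStructure, myNetworkSocket) are just placeholders matching the snippet above, and Stream::CopyTo requires .NET 4; it also still buffers internally, so it may not fully satisfy the no-copy requirement - but it does get rid of the word-by-word loop:<br />
<br />
<pre>// Untested sketch - one possible alternative, not a confirmed fix.
// It wraps the existing native memory; no managed copy is made up front.
using namespace System;
using namespace System::IO;

void writeStructure(myCppStructure* data, Stream^ myNetworkSocket)
{
    UnmanagedMemoryStream^ raw = gcnew UnmanagedMemoryStream(
        reinterpret_cast<unsigned char*>(data),   // native bytes, as-is
        sizeof(struct myCppStructure));           // total size in bytes

    // CopyTo (.NET 4+) streams the bytes to the socket using an internal
    // buffer - not strictly zero-copy, but no hand-written per-word loop.
    raw->CopyTo(myNetworkSocket);
    raw->Close();
}
</pre>
<br />
The appeal here is that UnmanagedMemoryStream only wraps memory that already exists; whatever copying happens is done once inside the framework rather than in a loop scattered through the application code. Whether that is good enough for this particular buffering problem, I don't yet know.<br />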
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-1550720668353448102012-01-26T15:39:00.002-05:002012-01-26T15:48:23.687-05:00Firefox - session file invalid error...I ran into an issue with Firefox today, and after scouring the web without finding an answer and plowing through the sessionstore.bak file to try to fix it, I finally figured out the issue.<br /><br />In Firefox 9, the Tools->WebDeveloper->Error Console had the following error:<br /><br />session file is invalid type error this_initialState.window[0] is undefined<br /><br />This occurred after FF crashed and I had to kill it - and even reboot - as the whole system was bogged down by something. I couldn't find anything on-line about it and started digging in. Firefox would not restore my session - all the tabs and tab groups, etc.<br /><br />I did a comparison with the new sessionstore.js file that it had, and found that there was a little section that was present in the backup but missing from the new default - in bold below:<br /><br /><br /><blockquote>{<br />"windows":<span style="font-weight:bold;">[],"selectedWindow":0,"_closedWindows":</span>[{"tabs":[{"entries":[<br /></blockquote><br /><br />I removed the bolded text, copied the resulting sessionstore file over sessionstore.js, and voila - Firefox reloaded everything!<br /><br />Hopefully others that have this issue will find this post and not have to spend a couple of hours trying to figure out how to get their data back.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-27428241149471392412012-01-20T13:36:00.004-05:002012-08-01T14:22:41.396-04:00What the content industry doesn't get about SOPA/PIPALamar Smith recently wrote an <a href="http://www.cnn.com/2012/01/20/opinion/smith-sopa-support/">article for CNN concerning SOPA</a>. However, he doesn't get what is wrong with it, as is clear straight from the opening paragraph:
<blockquote class="tr_bq">
<br />
<quote>The growing number of foreign websites that offer counterfeit or stolen goods continues to threaten American technology, products and jobs. Illegal counterfeiting and piracy costs the U.S. economy $100 billion and thousands of jobs every year. Congress cannot stand by and do nothing while some of America's most profitable and productive industries are under attack.</quote></blockquote>
<br />
Okay, foreign counterfeiting and stolen goods are bad things. However, there is nothing showing that they cost jobs or billions of dollars. Many properly done studies show that piracy of this sort usually helps drive business to the original creator.<br />
<br />
For example, someone making a counterfeit handbag won't match the quality of the original. Someone may buy it, but it'll break down, and they'll probably replace it with something from the original maker - especially if they are in a first-world country (e.g. the US, Europe, Japan, or Australia) - rather than go and get another counterfeit.<br />
<br />
Or, take music. The Grateful Dead and They Might Be Giants have both had a long history of encouraging people to copy and use their works. Both are extremely large bands now with very large and loyal audiences. This has not hurt them at all, but rather it drives their sales - as people hear the music and buy more when they discover they like it.<br />
<br />
Or take movies. The Anime community has a long history of importing works and dubbing (called Fan Dubs) or subbing (called Fan Subs). As a community they also encourage people to buy the licensed work when it is finally imported and subbed or dubbed for the country. This only introduces the works to bigger audiences, finds new audiences, and builds additional customers. Yes, there are some fans that just won't go the legal route, but most will.<br />
<br />
In all cases, that $0.30 lost on one product to piracy might turn into $5 or $6 down the road in repeat business.<br />
<br />
But let's get back to SOPA/PIPA.<br />
<br />
<blockquote class="tr_bq">
<quote>The Stop Online Piracy Act protects consumers and innovators by targeting foreign websites that traffic in stolen or counterfeit products, everything from movies to medicine to baby food.</quote></blockquote>
<br />
Again, it's a good thing to stop counterfeit products that hurt people - medicine, baby food, etc. But the bill isn't really targeting those kinds of things. It's targeting copyright infringement, and it bypasses Due Process.<br />
<br />
That is, it only takes one entity to show up in court and accuse a site (any site) of infringing their copyrights, and the court would be obliged to grant a takeover of the site. The owner isn't necessarily notified until their customers complain that the site is off-line, unless their DNS registration provider (e.g. GoDaddy, etc.) notifies them as it moves the DNS to point elsewhere.<br />
<br />
This flies in the face of the U.S. Constitution, which has a <a href="http://en.wikipedia.org/wiki/Due_process#The_U.S._Constitution">Due Process Clause</a> - <a href="http://www.usconstitution.net/consttop_duep.html">mentioned in multiple places</a>: the <a href="http://www.usconstitution.net/xconst_Am5.html">5th Amendment</a> and the <a href="http://www.usconstitution.net/xconst_Am14.html">14th Amendment</a>.<br />
<br />
The major backers of SOPA/PIPA - namely the content industry (MPAA, RIAA, Time Warner, NBC Universal, Disney, etc.) - are tired of Due Process after having to go through it for years and then losing in court for not being able to name the infringers - or for not being able to show why they should get those names and sue them to start with - so they couldn't care less about it at the moment. (Though they will probably regret that should SOPA/PIPA ever pass.)<br />
<blockquote class="tr_bq">
<br />
<quote>This information does a disservice to consumers, and it is being disseminated by those who have profited from working with illegal websites that steal and sell America's intellectual property.</quote></blockquote>
<br />
There is a lot of disinformation, yes; and it is being propagated primarily by the backers of SOPA/PIPA. Those against it are pointing out its actual results. The recent Internet Blackout day shows exactly what will happen should SOPA/PIPA pass. Google and others have very good reason to fear SOPA/PIPA, and not because they profit from it. (BTW, I am speaking out against it and showing the problems with it too, but I do not profit from any infringing activities, as Lamar claims I might since I am against it.)<br />
<br />
The reality is that SOPA/PIPA have a very big legal effect that will severely hamper the creativity of the markets, especially on the Internet.<br />
<br />
For instance, I am getting ready to start a company. I have a product planned that I am going to make, and I'll have a website. However, if another company complains that I am infringing their copyright - without even showing it, just making an accusation - they could shut down my start-up's Internet site, and effectively close up shop for the company. All because of an accusation by some entity that doesn't like what my company is providing, and would rather sue and shut me down than innovate themselves.<br />
<br />
<blockquote class="tr_bq">
<quote>The online blackout that occurred this week, which included Wikipedia, was also misleading. Wikipedia has nothing to fear from SOPA. It is ironic that a website dedicated to providing information knowingly offered misinformation about the bill. SOPA will not harm Wikipedia, domestic blogs or social networking sites.</quote></blockquote>
<br />
Wikipedia has everything to fear: all that needs to happen is for someone to upload some content that someone else claims infringes their copyright, and ALL of Wikipedia gets shut down.<br />
So again, Lamar is providing disinformation.<br />
<br />
<blockquote class="tr_bq">
<quote>Hyperbole has been rampant in the debate about SOPA. However, the bill in no way censors the Internet. It only targets activity that is already illegal, and only targets foreign websites that are dedicated to illegal or infringing activity. In fact, it is similar to laws that already govern websites based in the U.S.</quote></blockquote>
<br />
Censorship automatically occurs when you start shutting down websites based on accusations. It would be one thing if the site and its owner had gone through the courts and were found completely guilty. However, the content providers don't like (i) how much effort it takes for them to do that, (ii) their likelihood of success that way, or (iii) the time it takes. Yet those three things are there to protect the whole of society from the 'mob', to ensure the rights under the law of all involved. SOPA/PIPA are 100% against that - a direct reflection of their supporters, who probably had a very big hand in drafting the bill.<br />
<br />
<blockquote class="tr_bq">
<quote>What has not been publicized is the broad support for SOPA. It has been endorsed by a diverse group of organizations, including the National Association of Manufacturers, International Union of Police Associations, the U.S. Conference of Mayors, the National Songwriters Association and the National Center for Victims of Crime. The bill has even united strange bedfellows: the U.S. Chamber of Commerce and the AFL-CIO. It's not every day that you see business and labor on the same side of an issue.</quote></blockquote>
<br />
This just goes to show you that the organizations that support it have a big hand in commerce and not much in the way of protecting citizens. The National Songwriters Association is part of the RIAA. The RIAA and MPAA also have strong ties to the U.S. Chamber of Commerce and the AFL-CIO; several of the companies therein (Time Warner, etc.) also have ties to device manufacturers through their DVD/Blu-ray businesses. They also have a strong hand politically, pouring lots of money into politics. So again, the above is not surprising to see.<br />
<br />
<blockquote class="tr_bq">
<quote>Even the White House has weighed in, endorsing the need for legislation</quote></blockquote>
<br />
And the current administration at the White House has very strong ties to Hollywood. So again, it would only be surprising if they were against SOPA/PIPA. The same goes for any Democratic-leaning organization (e.g. unions) - and unions tend to back each other. So the Actors Guild would probably support things backed by the Police Union, and vice versa - the whole "I'll scratch your back if you scratch mine" thing (i.e. the old boys' network), as two appear stronger than one.<br />
<br />
<blockquote class="tr_bq">
<quote>respect the First Amendment and believe that any legislation passed by Congress must protect and defend our constitutional rights. But illegal and criminal activity is not protected by the First Amendment simply because it takes place online. For example, there is no First Amendment right to view, distribute or download child pornography over the Internet. Like child pornography, the theft of intellectual property is also illegal in the United States.</quote></blockquote>
<br />
The major issue is not the First Amendment. It's the Fifth and Fourteenth Amendments - the right to Due Process.<br />
<br />
<blockquote class="tr_bq">
<quote>The Stop Online Piracy Act works by cutting off the money to foreign illegal sites and making it harder for online criminals to market and distribute illegal products to U.S. consumers. The bill includes provisions that "follow the money" to cut off the main sources of revenue to these sites, and also protects consumers from being directed to foreign illegal websites by search engines. And it provides innovators with a way to bring claims against foreign illegal sites that steal and sell their technology, inventions and products.</quote></blockquote>
<br />
But you won't just get foreign sites. You'll get backlash that will involve domestic sites as well. And why would they stop with foreign sites? They'll do their best to show that a domestic site has foreign ties and therefore should be shut down just the same - and they'll probably do it by just showing that the site does business internationally (which ALL Internet sites do by default).<br />
<br />
<blockquote class="tr_bq">
<quote>Unfortunately, some critics simply want to maintain the status quo that harms U.S. companies, consumers and innovators.</quote></blockquote>
<br />
Actually, I don't like the Status Quo either. We started down an ugly road back in 1998 with the DMCA - something that needs to be repealed at least in part.<br />
<br />
I also believe current laws provide us all the tools necessary to combat piracy and counterfeiting without the need for SOPA/PIPA or even the ACTA Treaty that has been worked on in secret.<br />
<br />
<script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-87169653214049169252012-01-04T09:19:00.002-05:002012-01-04T09:21:26.400-05:00Iowa Primaries...Well, congratulations to Santorum on doing so well in the Iowa primaries. I hope the momentum continues and that you get the nomination.<br /><br />Please consider one thing - making Newt Gingrich your VP. The two of you would make a great pair for the main race.TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-25798225158475973602011-09-18T21:36:00.002-04:002011-09-18T21:53:24.484-04:00Qt5 - Introducing QDaemonApplicationOkay, so I'm not quite done with QDaemonApplication to the point where it's even testable yet. However, I wanted to at least announce it to get feedback on the structure I am using. Questions are good, and they'll help it be more robust in the end.<br /><br />Also, I'd like to note that I'm not quite used to PIMPL, so if there's something I did wrong in that respect, please let me know so I can correct it. I also use a slightly different programming style; however, I tried to keep it similar to what I am finding in other areas of Qt for consistency. Please let me know if there is anything inconsistent in that respect too.<br /><br />So with that said...<br /><br />I originally wrote about the effort in a previous blog post (see <a href="http://clocksmind.blogspot.com/2011/05/calling-contributors.html">Calling Contributors</a> and <a href="http://clocksmind.blogspot.com/2011/06/qt5-major-update-for-qtservice.html">Qt5 & a major update for QtService - QDaemonApplication</a>), and I finally found some time to be able to work on it, still with the goal of getting it in in time for Qt5.<br /><br />As a user of Qt5, a programmer would simply use the QDaemonApplication class much like they presently do the QCoreApplication or QApplication classes. They will also be able to do some more things between instantiating the QDaemonApplication and calling its exec() function - check parameters, etc. - potentially even fully by-passing exec() if they choose (of course, then they won't get a daemonized application, and the main program won't run - but that can be useful in certain scenarios).<br /><br />Behind QDaemonApplication is a series of APIs that provide the functionality. These APIs start off with some very basic interface classes (QAbstractDaemon*) for the interface (e.g. command-line, systemd, Win32 SCM, etc.), communication between the interface program and the daemonized program, and platform integration (e.g. Win32 SCM). This structure will allow us to easily switch between different components to do different tasks - e.g. Win32 SCM vs launchd vs SysV vs systemd vs upstart - and communicate in different ways - e.g.
Win32 SCM, File Pipe, Network Socket, etc.<br /><br />Eventually, as we add and support more, the interfaces, etc. will be chosen when Qt5 is built, and we'll try to keep sane defaults; however, presently I am simply trying to replicate the same level of functionality that is in the existing Qt4 QtService Add-on component.<br /><br />So, if you're interested in looking at what's there, even though the code documentation is thus far pretty much non-existent, you can see it at <a href="https://qt.gitorious.org/~benjamenmeyer/qt/brm-qt5-service">BRM-Qt5-Service</A> on Gitorious.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com2tag:blogger.com,1999:blog-139540257852707668.post-38159599362027280142011-06-21T13:53:00.007-04:002011-06-23T13:29:58.827-04:00Qt5 & a major update for QtService - QDaemonApplicationIn May I proposed that QtService be integrated natively into Qt5 [<a href="#1">1</a>], and I offered to spearhead that task [<a href="#6">6</a>] and support Windows and Linux (Embedded and X11) [<a href="#6">6</a>]. In June I set up a branch of the Qt5 master repository [<a href="#10">10</a>] for supporting this task (I'll likely need to rebase it), and am working to get up to speed on using git (I have primarily used SVN), so I expect that getting going will be a little slow in that respect. (Sadly, a bit slower than anticipated. I am working on it.)<br /><br />All that said, below is the summary from the Qt5-Feedback mailing list covering the naming conversations, etc., and some additional things I have contemplated since. The discussion has been very good thus far, but has made keeping track of it all via e-mail a little hard – so I am looking for a nice Wiki home for it all. I'll put it there once I find a nice home (probably at the Qt Dev Wikis somewhere) and get a chance to repost it here.<br /><br />In the meantime, please feel free to leave comments below.<br /><br />Background:<br /><br />QtService is presently an add-on provided by Trolltech/Nokia through the Qt Components system [<a href="#1">1</a>]. However, for a variety of reasons it is desirable to myself and others that it be a native part of Qt [<a href="#1">1</a>,<a href="#3">3</a>], whether as part of Qt Core or a module provided with Qt itself [<a href="#1">1</a>, <a href="#9">9</a>]. In either case, it needs some TLC to bring it up to date as well as some improvements. To start, the existing QtService implementation is a C++ Template-based implementation [<a href="#1">1</a>,<a href="#3">3</a>]; the end result is that this prohibits use of signals/slots internally to the QtService code [<a href="#1">1</a>,<a href="#3">3</a>], prevents the ability to do a scheduled, orderly shutdown [<a href="#1">1</a>], and makes it hard to work with the command-line [<a href="#3">3</a>,<a href="#5">5</a>].<br /><br />It has been proposed to make a new QtService implementation that makes use of C++ Abstract Interface classes instead [<a href="#1">1</a>,<a href="#3">3</a>]. In the process of doing so, the ability to derive an interactive service will be removed, both to encourage best practices and because it will not work on all platforms [<a href="#1">1</a>,<a href="#3">3</a>]. The new implementation should likely use a different name - e.g. QService, QDaemon, or QDaemonService - to be more consistent with existing names of parallel functionality - e.g.
QCoreApplication, QApplication [<a href="#5">5</a>], and should address issues in the command-line [<a href="#3">3</a>,<a href="#5">5</a>], communication between controller and service [<a href="#5">5</a>], and add the ability to do controlled shutdowns of the service [<a href="#1">1</a>].<br /><br />Location:<br /><br />I originally called for the work to be integrated into Qt Core.[<a href="#11">11</a>] However, after fleshing out further details we revealed several dependencies on modules – Qt SFW, Qt Network, and others. That is not to say that Qt SFW and this may not end up in the same module, but it will at least be in a separate module. For the time being, I am calling the new module QtService with the intention that Qt SFW be able to share it (more below).<br /><br />Naming:<br /><br />I originally proposed to use the name QDaemonService.[<a href="#11">11</a>] Some thought this was too long and it was proposed to just use QService.[<a href="#14">14</a>] However, it was pointed out that Qt SFW already uses the QService* namespace.[<a href="#20">20</a>,<a href="#22">22</a>] So, we will use the QDaemon* namespace instead to minimize confusion in the API.<br /><br />Interface:<br /><br />QDaemonApplication will be a formal object like QApplication and QCoreApplication, and should set up the application environment in a similar manner. That is, the command-line options provided should be available via calling QCoreApplication::arguments(). It should also have a function to tell the program whether it is the formal service or the controller so that developers can interact in both modes - thus being able to interact with the command-line as necessary.<br /><br />In keeping with the naming conventions mentioned previously [<a href="#5">5</a>,<a href="#11">11</a>] the primary interface class will be QDaemonApplication. Thus the main application will look something like the following:<br /><br /><code><br /> #include <QDaemonService><br /><br /> int main(int argc, char* argv[]) {<br /> QDaemonApplication service(argc,argv);<br /> ...<br /> return service.exec();<br /> }<br /></code><br /><br />Back-End Communications:<br /><br />In the QtService component, the service code used network connections under *nix and the Win32 Service Manager API on Windows for communication, which primarily relies on some IPC and command-line stuff to communicate. I think it is very important that each platform integrate something native to do the communications. To that end, I believe Qt SFW likely provides the best method of providing that functionality, and think we should collaborate between the two to utilize the IPC portion. Windows support will still require using the Win32 Service Manager API at least on the front end, and may need it in the back-end too, so there may be some additional options of that nature. But primarily, I think we can rely on Qt SFW for IPC functionality – to provide integration for IPC, D-Bus, Shared Memory, etc. – whatever is best for the platform - and do so via configurability at compile time or (even better) at run-time. Otherwise, I fear we may reinvent the wheel that another portion of Qt has already finished – so why do it twice when the functionality is already there? (Yes, I realize Qt SFW was not so readily available in Qt4. But from what I understand it will be in Qt5.)<br /><br />This functionality will be hidden by the QDaemonApplication class.
Developers utilizing these classes should not have to be concerned about the back-end communications – it should just work from their perspective.<br /><br />Front-End Interfaces:<br /><br />In the QtService component, the service code used the command-line as the sole front-end interface. However, QDaemon* should integrate with various systems, as well as keeping that simplistic command-line interface. So, while the command-line interface will be the first-out-of-the-gate supported front end, we should also add configuration support (likely build-time only) for supporting other mechanisms – e.g. systemd, zeroconf, upstart, bonjour, launchd, etc. Once we have moved beyond supporting solely the command-line, native mechanisms will be set as the defaults for each platform when such functionality is available (e.g. launchd on Mac). In the case where there may be several different mechanisms (e.g. Linux – systemd, zeroconf, command-line, D-Bus, etc.) we may select an appropriate default – e.g. D-Bus or command-line for Linux.<br /><br />The primary idea here is that since there are so many different front-ends for driving service/daemon applications, it should be configurable with appropriate defaults selected. It should be as easy as possible to enable integration of new front-end interfaces for future expansion.<br /><br />This functionality will be hidden by the QDaemonApplication class.<br /><br />Developer Interaction:<br /><br />As the application will daemonize itself in the QDaemonApplication object, developers will need an interface to that class. To this end, developers will be required to create a class derived from an abstract interface class – QAbstractDaemonObject – which is then registered in some manner (function/signal/slot) with the primary QDaemonApplication object.<br /><br />Class Architecture:<br /><br />The QDaemon* namespace will consist of two public classes:<br /><br />QDaemonApplication<br />QAbstractDaemonObject<br /><br />And a number of internal classes to provide the various mechanisms for setting up the environment, interacting with the front-end APIs, etc.<br /><br />The primary purpose of QAbstractDaemonObject will be to provide sufficient interfaces for developers to utilize both pre-daemonization and post-daemonization. The QDaemonApplication object will do most of the work in bringing up the application; however, it will not daemonize the application until the exec() function is called – thus providing the developer time to interact with the pre-daemonized process. By allowing the developer to derive from this interface, we can also provide sufficient means to enable communications for the developer between the pre-daemonized and post-daemonized process – for custom communications (likely serializing to and deserializing from a QByteArray) via standardized signals/slots.<br /><br />Instances:<br /><br />Some platforms (e.g. Windows) only allow a single instance (primarily determined by the installation location and name of the service as registered with the Win32 Service Manager API) of a service to operate at a time. Other platforms couldn't care less.
To this degree, QDaemonApplication should contain the ability to differentiate between platforms and inform the developer if it is allowed, and if it is provide the developer with an easy means (boolean option) on whether to allow it or not.[<a href="#14">14</a>]<br /><br />I think that's enough to get some discussion going.<br /><br /><a name="1">[1] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000246.html</a><br /><a name="2">[2] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000247.html</a><br /><a name="3">[3] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000253.html</a><br /><a name="4">[4] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000256.html</a><br /><a name="5">[5] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000259.html</a><br /><a name="6">[6] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000262.html</a><br /><a name="7">[7] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000266.html</a><br /><a name="8">[8] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000267.html</a><br /><a name="9">[9] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-May/000264.html</a><br /><a name="10">[10] https://gitorious.org/~benjamenmeyer/qt/brm-qt5-service</a><br /><a name="11">[11] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000449.html</a><br /><a name="12">[12] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000450.html</a><br /><a name="13">[13] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000453.html</a><br /><a name="14">[14] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000454.html</a><br /><a name="15">[15] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000455.html</a><br /><a name="16">[16] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000457.html</a><br /><a name="17">[17] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000459.html</a><br /><a name="18">[18] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000460.html</a><br /><a name="19">[19] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000461.html</a><br /><a name="20">[20] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000463.html</a><br /><a name="21">[21] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000478.html</a><br /><a name="22">[22] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000500.html</a><br /><a name="23">[23] http://lists.qt.nokia.com/pipermail/qt5-feedback/2011-June/000525.html</a><br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com5tag:blogger.com,1999:blog-139540257852707668.post-13824875559876054012011-05-20T19:53:00.003-04:002011-05-20T20:17:11.016-04:00Calling contributors...Recently Nokia announced the initial planning stages for Qt5, and looked to the community for ideas on how to improve Qt in a generally source-compatible way, meanwhile allowing extensions to be added and some things to be modified. All this, via the a mailing list - qt5 dash feedback at qt dot nokia dot com. There have been a number of great ideas that have come up - from additions to the QDateTime, time-zone support, enhancing the printing, integrating more from KDE, and lots more.<br /><br />So why this blog? Well, I've been working with Qt professionally for just over 2 years now making a distributed network-based geometry measurement system for the railroad industry. 
The design uses a lot of service applications - which use the QtService component add-on. Well, I'd very much like to see the QtService component become part of the Qt Core library in Qt5, but it needs a bit of love to get there.<br /><br />Don't get me wrong - QtService is great. It works wonderfully, <span style="font-weight:bold;">but</span> it hasn't been updated in quite a while, and doesn't officially support Qt 4.6 or 4.7. Looking at <a href="https://gitorious.org/qt-solutions/qt-solutions">Gitorious</a>, it's been put into the archives - i.e. it's no longer going to <span style="font-style:italic;">be</span> officially supported. Yet, I use this every day and I'm sure others do as well.<br /><br />There are also a few other relatively minor problems with QtService:<br /><br />1. It is template based. And this means that the base application from which you derive your own application can't use Qt's signals/slots to deliver the basic functionality. This is of greatest hindrance in the communications between the 'controller' portion of the application (which is provided for you) and your application. It also makes it very difficult to do things like delaying a service stop request (e.g. so you can unregister the application from a central server).<br /><br />2. The command line is rather limited. That is, you get what they provide you, and it's very difficult (actually nearly impossible) to extend it to do other things - especially if those things involve passing a parameter to your application (since there is NO signal you can send to it).<br /><br />So, here I am now looking ahead to Qt5 and seeing that this nice component is not going to be supported. Meaning, I'm going to have to support it myself - and so is anyone else that wants to use it, and I'm sure there are others out there.<br /><br />Of course, since it is not a first-class citizen of the Qt Framework - and you have to explicitly pull it in and install it - I'm sure that not everyone that really could make use of it does. So there are probably a lot more people out there that could make use of it and aren't, simply because it's too much work to install and use it, and you get those icky limitations that aren't very friendly to you either.<br /><br />Fortunately, Qt is open source. And Nokia is moving Qt to open governance, especially with Qt5. This means that I, and everyone else, have the ability to contribute to Qt5 like never before. It also means that if we can get a suitable new replacement for QtService written and on par with other parts of Qt, then we stand a chance of having it become <span style="font-style:italic;">part</span> of Qt itself - a first-class citizen.<br /><br />Well, time to wrap up my thoughts for this post...essentially, I have now joined gitorious (https://gitorious.org/~benjamenmeyer) and will be making a branch in the next week or so to start this work on. (Very exciting.) Yes, I plan to "put my money where my mouth is", or so the old saying goes. I doubt my employer will let me do it on company time, but it'll be worth it if only so I don't have to maintain the other version in a far less friendly and open manner. (Of course, that also means pushing my employer to use Qt5 when the time comes, which is quite a bit easier to do.)<br /><br />So once I get back home, then I'll be finishing the setup of my gitorious account, and creating a branch, and possibly a team, for this effort.
I very much do look forward to learning git in the process (I've been a staunch SVN user for years, but I mostly do work where centralized versioning makes sense; and community projects like this make better sense with a distributed versioning system.)<br /><br />So, anyone else out there that is using the QtService component, or would like to join in - please join us on the Qt5 Feedback mailing list mentioned above, and we'll get you plugged into the new work once I get it all set up.<br /><br />And certainly look for more here as this endeavor continues. I'll certainly try to post more as it comes together.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-5994282653510950602011-02-10T23:08:00.003-05:002011-02-10T23:41:47.730-05:00Portable software...Over the last year and a half I've had the opportunity to work with <a href="http://qt.nokia.com">Qt</a> in software development, and I've enjoyed it greatly.<br /><br />In a previous job, I looked at what it would take to port our software from being just on Microsoft Windows to also running on Linux, Mac, and various UNIX operating systems. In the course of that research I looked at several options: (i) Qt, (ii) <a href="http://www.gtk.org">Gtk</a>, and (iii) <a href="http://www.wxwidgets.org">WxWidgets</a>. The problem I ran into then was that (a) Qt simply cost too much for the project (due to our budget), and (b) there was a security concern - not from the bug aspect but from the aspect that the target market was military and getting Qt certified for the environment would be hard - and a failure there could keep the project from certification as well. WxWidgets and Gtk both had the same security concerns; however, Gtk also had a licensing concern given its LGPL nature and the fact that our software was proprietary - Qt provided a great way around that if we could afford it, but we couldn't. So alas I was unable to get into Qt at that time, but as a result I was able to write a good portion of code for doing the same thing - writing my own platform abstraction API.<br /><br />Now, of the three I loved the ideas and concepts introduced by Qt the best. Gtk, at least at that point, was still very much Message Mapping based from what I could tell. MFC was more than enough of that for me, and was just a pain - everything was determined at compile time and there was little flexibility. One of the great concepts that sold me on Qt was the Signals & Slots system that replaced the message mapping. WxWidgets supported both. Otherwise, all three seemed to be fairly equivalent at that time.<br /><br />Now, before I go on let me state that I am not trying to convert anyone from Gtk or WxWidgets to Qt with this post. However, if you are using .NET or MFC or anything else (especially on Windows) then you need to start looking elsewhere, for numerous reasons which I'll save for another post.<br /><br />I had written applications in MFC and Win32 for a number of years - both GUI and services. They met the need at the time they were created but are no longer sufficient. .NET, on the other hand, does seem to be vastly updated by comparison but still has quite a few issues - at least patent-wise if you want to have portable software.<br /><br />Now, portable, multi-platform software is going to become ever more important, and unless you are doing certain things that are very tied to a specific environment (e.g.
extensions to Windows Explorer or KDE Plasma) then you can reach all your customers on all their platforms with a single code-base using the right tools. WxWidgets, Gtk, and Qt are some of these tools - and probably the best and most portable of all available. They are also Open Source. WxWidgets is completely public domain; while Gtk is solely LGPL. Qt, however, has several licenses - GPL, LGPL, and commercial licenses to choose from, so from a business perspective it makes the best sense - at least, as long as they continue the commercial license program; while from an Open Source perspective all three are really about equal in choice.<br /><br />All that said, I must certainly say that the creators of Qt have done an excellent job and really gotten the platform right - one that is also continuously improving as well. Perfect just gets better.<br /><br />So, now why do I say that?<br /><br />Well, Qt is split into several modules, and they're working on making it more modular yet. Of course, you have to use the core (Qt-core) to use any of it, but the rest is pretty much optional - everything from networking to XML to services (daemons), and more - even in-program scripting. Additionally they also made it very easy to convert from Qt to Standard C++ and back - most all of their classes have functions to convert back and forth where overlapping occurs. The layers make sense and work; consistency abounds throughout the APIs.<br /><br />So now with a single API you can reach from Linux, to Mac, to UNIX, to Windows; from Desktop to Server to embedded devices (tablets, netbooks, cell phones, specialty devices, etc.). There's not much you'll need a specialized, platform dependent code-base for - which basically comes down to Kernel-land software, and integration environments whose requirements prohibit being able to choose what API set you want to use (e.g. Windows Explorer ala TortoiseSVN). And you get all of it at native performance and looks, with bindings to most languages (e.g. Python, Java, Perl).<br /><br />Businesses can certainly save themselves a lot of money by using APIs such as Qt, as well as preserve their businesses should anything happen to Microsoft or MS Windows - it's quite a gamble to put all your eggs in one basket, but yet so many software development houses do.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-12148793592242066912011-02-10T22:48:00.003-05:002011-02-10T23:07:32.523-05:00Fixing Health CareDespite what the Health Care Industry, Congress, and President Obama would like you to think there really is a simple way to fix health care.<br /><br />President Obama and the Democrats in Congress want you to believe that all the changes in their recently passed, much despised, and soon to be at least partially repealed bill is required to fix health care. However, it really does nothing for you - and it only makes the debts higher, extending entitlements where none are needed.<br /><br />The Republicans don't have much to offer, but are at least doing right by trying to remove the bill that no one really ever wanted to start with - well, except the Democrats in Congress and Obama since it made them look like they were doing something when they really weren't.<br /><br />So what's the correct fix?<br /><ol><br /> <li>Insurance companies will be required to accept all properly licensed doctors. 
E.g. eliminate the whole "out-of-network" thing; it's really just a mess that is completely unnecessary.</li><br /> <li>Insurance companies will be required to pay what the doctors charge. They must not be allowed to pay out only a portion of the charge, and there must be no refusals to pay.</li><br /> <li>Doctors will be required to charge only what is necessary - they may not inflate what is charged to the insurance companies or to individuals.</li><br /></ol><br />Insurance companies need to remember what their business is - betting that people will not need the benefits they pay for. However, when people do need those benefits, the insurers also need to pay out. Doctors are licensed by the American Medical Association, and as such need to be allowed to make the final call, possibly with a second opinion as well, but the AMA should lay out the rules. If people opt out of having insurance, they should have to pay the full amount themselves; but the insurance industry does not and should not depend on 100% participation to work. Simply put, people that are opting in are betting they will need it while those opting out are betting they will not.<br /><br />Doctors ought to be able to charge what they need to. If necessary the AMA or a Federal program can provide oversight of charges - to ensure they stay within reason (e.g. costs plus a small percentage of profit). But what is charged must be paid out in full without the doctors having to appeal or inflate prices just to get what they need to stay in business, and people shouldn't have to choose a doctor based on their insurance but based on the quality of care and services provided by the doctor. As a result, people not using insurance will be charged the same as the insurance companies - and doctors would have no reason to discount it for them as a result.<br /><br />So now we've solved the same problem, far more effectively, and without intruding on State or personal rights as granted by the U.S. Constitution.<br /><br />All we have done, however, is forced Congress to break its ties with the insurance industry, their associated PACs, etc., and actually represent the people.TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-18842061483564557582011-01-24T14:03:00.004-05:002011-02-10T22:48:46.470-05:00The End of US Entitlements...It's time to put all US Federal Entitlement programs on the chopping block - and schedule their removal. The question is how?<br /><br />To start with, Social Security has an easy, but long term, method to remove it. Take an age - say 30 (I fall in this group) or 15 or whatever - and say "You will never receive Social Security benefits". Hold to it (e.g. the age goes up every year following the group), and then close down the program as fewer people receive it. <span style="font-style:italic;">But </span>also require that once it is shut down, all remaining funds are directly applied to the Federal Deficit if it still exists (highly likely) or, should it no longer exist, get paid back to everyone in the nation via a tax refund that is equally given out to everyone. (Of course you could also be generous and say just those below a certain income level too.)<br /><br />Note: If you're wondering why I chose those methods of paying it back to the people, it is because it would be very hard to ensure that everyone gets paid back what they put in. So an equitable solution must be found.
Of course, it also assumes there will still be money left in the Social Security coffers, which there may not be.<br /><br />So there's Social Security - gone. One program down.<br /><br />But what to do with everything else? What about Medicaid? Medicare? Welfare? All these programs are highly problematic to start with in how they are <span style="font-style:italic;">currently</span> run. So let's just go ahead and shut them all down <span style="font-style:italic;">now</span>, but we'll replace them with a single program designed to do what is at the core of those programs - helping out the needy and the poor. We'll call this program "PrimeCare". That said, here's what PrimeCare will do:<br /><br />(i) if you're out of work, it'll help you get a job.<br />(ii) if you can't afford food, it'll help you get enough food every day - but you won't be able to go grocery shopping for it. You'll have to go to a PrimeCare facility. Transportation, if needed, will be provided as well; or, in the alternative, the food and supplies will be delivered to you.<br />(iii) if you need medical support, then PrimeCare will provide several insurance-equivalent options.<br />(iv) if you're having trouble paying your bills, then PrimeCare will help you through debt management and bankruptcy if necessary, possibly even with temporary low-interest loans.<br /><br />What's the purpose? To provide for the needs of those that can't otherwise afford it.<br /><br />What do you need to qualify?<br /><br />Well, mostly you'll have to be poor and unable to provide for yourself and your family. You'll also be required to give up spending on certain things, like cable TV, etc. - things that you do not need to survive. You'll also be required to file the normal tax documents.<br /><br />What's the goal?<br /><br />To help them get to the point where they can pay their own way, and to provide for those that simply cannot (e.g. elderly, severely handicapped, etc.).<br /><br />How do we pay for it?<br /><br />If we were to take a lesson from the Bible, then we'd take an easy 10% out of everyone's paycheck. However, you would actually end up paying less in taxes than you do now - where you take out 7% for unemployment, 2-4% for Medicare/Medicaid, 7% for Social Security, and more. All those things go away and instead it all gets replaced by a solid 10% for everyone. If you want to make it progressive, then it could be:<br />(a) 0% up to the poverty level (defined solely by the Bureau of Labor Statistics)<br />(b) 0.05 * personal income * M, where 'M' ranges over whole numbers from 1 to 4 and is determined by a simple table of income levels (defined by the IRS):<br />(b.1) '1' is up to the national average income of the previous year<br />(b.2) '2' is up to twice the national average income of the previous year<br />(b.3) '3' is up to four times the national average income of the previous year<br />(b.4) '4' is for everyone else<br /><br />That, of course, means removing the entitlements and reinventing them as aid for those that need it, providing a way and an incentive to get out of the programs. It also means that we must be willing to let those who are not willing to participate in the program, or to continuously do what is necessary to keep receiving aid, exit the program without aid - even if they are unable to provide for themselves otherwise. Why are such incentives and requirements necessary?
To help deter people from abusing the programs the way our current programs are abused.<br /><br />Now, for the Medicaid/Medicare replacement to really work, we also have to change the whole health insurance industry - but this post is already long enough, so I'll write about that another time.TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-53667563707708766552010-02-15T18:42:00.003-05:002010-02-15T18:52:40.617-05:00What to do about rising debt...As <a href="http://finance.yahoo.com/news/US-debt-will-keep-growing-apf-219502322.html?x=0">people are starting to worry about the US National Debt</a>, what can be done?<br /><br />Well, to start with:<br /><br />1. Eliminate Medicare and Medicaid. If they must be kept, outsource to the lowest bidder with a government agency remaining the benefits approver. That is - combine them into one program, let a single agency decide what is approved/disapproved, but let the free market decide the cost. Stop spending $19 on an aspirin!<br /><br />2. Eliminate Social Security.<br /><br />Seriously. Yes, there are some people counting on it. But no one should. I certainly don't. Set an age where it is deemed that people have enough time to save up for their own retirement, probably around 40. Eliminate Social Security benefits for anyone under that age. (Yes, I'd likely be under that age.) I don't count on getting the money back that I put in - don't give it back to me either. Just keep it and end the program. That's my contribution.<br /><br />3. Start trimming<br /><br />Get out of the business of managing by head count. Reduce the red tape, eliminate the bureaucracy that has built up. Actually work to make government more efficient.<br /><br />Yes, some parts of government need to grow. Others are too big and need to shrink. Yet others need to simply go away entirely (Social Security!).<br /><br />4. Start paying down that debt without taking more on.<br /><br />Balance the budget. Don't spend more than you bring in.<br /><br />I'd say raise taxes a little to help pay it off - and the only thing that tax raise would be allowed for would be to pay it off - but that wouldn't help. Congress (and the Democrats especially) will just spend whatever they can.<br /><br />There's probably more that can be done, but it has to start with getting rid of a lot of the social programs that are just plain utter crap, e.g. Social Security. Replace them with programs that teach people how to do it for themselves, and set up a smaller program for those that really can't - e.g. those on disability that can't work - and make it hard to get into, e.g. several doctors must sign off, reviewed every couple of years, etc.<br /><br />It'll do us all good.<br /><br />But most importantly STOP THE SPENDING!<br /><br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-15258244751072973532009-05-18T17:41:00.003-04:002009-05-18T18:13:25.535-04:00How we can overcome deflation...Thinking about some of my past posts - calling out for allowing deflation to occur - it occurred to me that I might have missed one aspect - loan contracts. So I did some more thinking and here's what I came up with...<br /><br />Nearly every economist will tell you that deflation is bad. Yet (as much as they want to ignore it) it is a part of a healthy economy.
However, our economy structures loans in a bad way - one that always assumes inflation. This is most evident both in the Great Depression between 1929 and 1944 as well as in our present economic climate, where the housing market literally forces people to give up their homes because the market value is lower than the mortgage and they either (a) have to sell for some reason in the immediate or near term, or (b) simply cannot afford to pay the mortgage any longer due to financial troubles.<br /><br />So the question becomes - how can we allow for deflation while at the same time not undermining our current system?<br /><br />The solution may be simpler than one thinks - allow for deflation in the loan contracts through some relatively simple clauses:<br /><br />i. All the contracts have interest rates. Apply the growth in the interest during inflationary periods.<br />ii. When in a deflationary period, the interest rate drops sufficiently to account for the difference in value due to the deflation.<br />iii. Any lost inflation per #ii is not counted against the borrower by the lender.<br /><br />Basically - if inflation is 5%, then the loan contract's interest rate applies. However, if deflation kicks in, then the 5% interest rate might either change or go away entirely. (While one might like it to drop below zero, it would probably be hard to get buy-in from lenders if it did, unless it was a dramatic deflation. And by dramatic I mean something like 15% or greater deflation, not simply 1-2%.)<br /><br />Now why does this work? Valuation of the currency. The lender is still receiving more value back than what they paid out. For example, if a lender lent out $100 at a 5% interest rate, that would net them $105 if paid. If deflation kicks in at 5%, then that $100 is effectively worth $105.26 (100*100/95) just because of the deflation. If the borrower paid back the $100 without any interest, the lender would have still made their 5% back due to the increased value of the currency. However, if the lender continues to charge the 5%, then they would receive $110.53 (105*100/95) - roughly 11% - thus making the loan unaffordable to the borrower, as it ends up charging 6% more than it should have.<br /><br />Some will say "well, tough luck, you took a gamble with the loan - that's life". True, you did take a gamble, but so did the whole financial institution, based on a flawed assumption - that deflation will never exist. Why not involve deflation in the assumption - that it will exist because it does in real life - and adjust the gamble based on that?<br /><br />This little change - of prorating the interest rates for deflation during the life of the loan - will allow deflation to occur in a safe and harmless manner for loan providers and borrowers.<br /><br />Ultimately this benefits both lenders and borrowers. For lenders, it will mean fewer people having to walk away from a loan when deflation occurs. For borrowers, it means better financial stability to continue paying the bills in a deflationary period.<br /><br />Lenders can start by including language for this in new loan contracts. Borrowers can start by pushing for this kind of language in new loan contracts. For both, it means less time spent in bankruptcy courts during those deflationary periods. And either lenders could extend this to existing contracts, or the gov't could mandate it for all existing contracts.<br /><br />Aside: My guess is that this would only really need to apply to large loans (e.g. cars, houses, commercial, etc.)
that are required for the economy to continue. Small loans (e.g. credit card, etc.) should probably be able to do without this, though they would likely get it too just to make things fair overall.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-3177340476087237522009-05-13T19:38:00.003-04:002009-05-13T19:46:27.061-04:00Main driver in this recession?I don't know why, but for some reason people seem to think that the main driver in this recession is housing (http://finance.yahoo.com/news/Stocks-fall-on-weak-retail-apf-15237010.html):<br /><br /><blockquote>Meanwhile, the main driver of the recession -- the collapsing housing market -- has yet to turn around. RealtyTrac data said April's foreclosures were up 32 percent from a year ago, and up slightly from March. It was the second straight month that more than 340,000 U.S. households received a foreclosure filing.</blockquote><br /><br />The collapse of the housing market is really only one of the symptoms of the driver of this recession. What is the real main driver of this recession? DEBT.<br /><br />How do we know that DEBT is the main driver? Because as credit tightens, one of the main factors is the debt-to-income ratio. If your income is not high enough in proportion to your debt (i.e. you have high debt and low income), then the loan is denied. If, on the other hand, you have low debt and high income, then you are a safe bet for a loan, and they'll do whatever it takes to get you a loan. (There are a few other factors too, but that's a primary one.)<br /><br />What can we do to stop the main driver? Start paying down the debt.<br /><br />Seriously.<br /><br />South Carolina's Governor Sanford has it right - pay down debt.<br /><br />And Obama's continuing plan to try to spend our way out of this is only going to make it worse - far worse - as we'll have to take on yet more debt (as a nation) to pay back the interest on the existing debt. Instead of trying to push money out everywhere else, Obama, the Fed, the Treasury, and Congress should be looking at what they can do to pay down the Federal debt. Until they do, we're in for an eventual collapse - we might (and I stress might) get away with it this time, but you can't run from it forever, as many are now finding out in their personal and work lives.<br /><br /><script src="http://slashdot.org/slashdot-it.js" type="text/javascript"></script>TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0tag:blogger.com,1999:blog-139540257852707668.post-40913650517890198562009-02-09T20:45:00.002-05:002009-02-09T21:02:31.188-05:00Obama needs to get off the air...What's really funny right now is that Obama is tripping over his words as he speaks to the American public. It's really embarrassing too. In answering questions, he's basically going back to only a couple of points (e.g. can't just do tax cuts), and tripping over most everything else. What's more - it's very funny to watch him as he back-pedals over his stances on Iraq and Afghanistan as he has to match up with reality.<br /><br />That said, it's also very evident from the conference that he has no clue what caused the financial problem.
He seems to be putting the blame squarely on the banks, though he did at least recognize the overspending of the American public - probably only because it was an answer directly related to the overspending of the American public.<br /><br />True, the banks played a large role in the financial problem. But they also didn't cause people to overspend. They didn't cause people to put things on credit cards when they didn't have the money to pay for them. They didn't cause people to need payday loans to make rent. They also didn't cause businesses to go to pure JIT (just-in-time) manufacturing and move away from inventories to producing goods as close as possible to when the buyer buys, minimizing any overhead.<br /><br />So what does this mean?<br /><br />Well, since businesses are using a <span style="font-weight:bold;">lot</span> of JIT, their cuts due to loss of demand are more immediately felt across the various sectors. It also means that their increases will be more immediately felt across those same sectors when the time comes. It's really a double-edged sword.<br /><br />However, the bigger issue is that we are used to spending more than a dollar for every dollar we bring in. Businesses got used to it, and now that is no longer happening. Businesses and the world need to adjust. And we're not going to spend our way out of it.<br /><br />In order to get out of this, we have to create new, steady, long-term sources of jobs. New companies that will turn into long-term companies. We need to get Wall Street to stop looking at only the 1-year, 2-year, or 5-year plans, and look at the 20-, 30-, 40-, and 50-year plans.<br /><br />Tightening up capital will help. It will also help to loosen that capital where it needs to go - start-ups and SMBs. In other words, the only companies that should qualify for the major capital going out should be the ones that have fewer than 500 or 1,000 employees and are (preferably) within their first 10 years of business. All others should be on a secondary or tertiary list to get what's left over. Why? Those are the majority of the companies that will <span style="font-weight:bold;">create</span> new jobs and spend money like no one else. They mostly have nothing to lose, and they are always the ones to drive us through booms. But we also need to do it in a way so as to prevent a bust when the money closes down.TemporalBeinghttp://www.blogger.com/profile/06247647473502902350noreply@blogger.com0