Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
As a putative statement of fact, the second part is false. Noncompliant use is sometimes ruled out by copyright law, but is only categorically ruled out if you have done something, like sign a contract, to relinquish your rights. Using or copying a file does not in itself constitute signing a contract. There are ways you can legally use the file that are not in compliance with the license, such as:
- If an exception to copyright restrictions applies, such as fair use
- If the period of copyright protection has ended
- If you obtain a different license from the rightsholder that grants rights beyond what the Apache License grants (dual licensing)
A license is not a contract. It can only grant an exception to a background prohibition (think hunting license, drivers license); it can’t have the effect of establishing a prohibition that wasn’t there already. Where there is no prohibition, as in the case of fair use, a license has nothing to say. Unconditional “may not” language is not appropriate in contexts like the above.
Granted, this sentence is not in the license text itself, but rather in the boilerplate. But it is misleading nonetheless.
Frequently I see people confusing license and contract. The confusion is natural and I didn’t get it until I worked at Creative Commons. One source of confusion is that the two are often linked. When entering into a contract, you might agree to do something like pay money or give up rights, in exchange for which you might be granted a license. Libraries, for example, sometimes give up rights like text mining (which is not restricted by copyright law) in exchange for access to journals. But the relinquishing of rights is a term of the contract, not the license. And you’re only bound by a contract (the contract only exists) if you agree to it.
There is another problem with the Apache statement, which is that copyright law only restricts copying (performance, translation, etc.), not all “use”. It doesn’t make sense to license use when use isn’t prohibited.
Another brain dump, continuing my ongoing effort to (a) make sense of Brian Cantwell Smith (b) be of some use to OBO, which in my opinion is in a crisis regarding information
(see IAO_0000136). Don’t take this too seriously – I’m a dilettante when it comes to philosophy. Take this as a homework exercise for someone who’s trying to figure it out. Please provide corrections.
Alan Ruttenberg once gave me the following definition of ‘about’: a sentence (or other utterance) X is about an object Y iff a [syntactic] part of X refers to Y.
At a gut level I don’t like this at all, but the following is the best alternative I’ve come up with:
A proposition X is about an object Y if the truth of X depends on the state of Y.
This seems better because it is semantic instead of syntactic. It doesn’t depend on how the proposition is expressed / coded, or on any understanding of reference, which is almost as mysterious as aboutness.
My alternative relies on an understanding of ‘depends on’. To nail this you have to rule out any changes to the truth of X caused by factors other than changes to the state of Y. [Added: That's badly said, what I mean is that to prove the change to Y is responsible for the change in the truth of X, one would want to come up with a situation where there's nothing else to attribute it to. See next sentence.] That would lead to the following: The truth of X depends on the state of Y, if there are two possible world states w1 and w2 such that w1 and w2 differ only in the state of Y, but in which X has opposite truth values.
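The two-world criterion can be made concrete in a toy, finite setting. In the sketch below (all the names — lamp, door, their states — are my own illustrative choices), a world state is a dict mapping objects to their states, a proposition is a function from world states to truth values, and the `about` check looks for a pair of worlds that differ only in one object yet give the proposition opposite truth values:

```python
from itertools import product

OBJECTS = ["lamp", "door"]
STATES = {"lamp": ["on", "off"], "door": ["open", "closed"]}

# The world state space is the product of the objects' state spaces.
WORLDS = [dict(zip(OBJECTS, combo))
          for combo in product(*(STATES[o] for o in OBJECTS))]

def differ_only_in(w1, w2, obj):
    """w1 and w2 differ in obj's state and agree everywhere else."""
    return (w1[obj] != w2[obj] and
            all(w1[o] == w2[o] for o in OBJECTS if o != obj))

def about(prop, obj):
    """The truth of prop depends on the state of obj: some pair of
    worlds differs only in obj yet gives prop opposite truth values."""
    return any(prop(w1) != prop(w2)
               for w1 in WORLDS for w2 in WORLDS
               if differ_only_in(w1, w2, obj))

lamp_is_on = lambda w: w["lamp"] == "on"
assert about(lamp_is_on, "lamp")       # its truth depends on the lamp
assert not about(lamp_is_on, "door")   # but not on the door
```

This is only a model of the criterion, not a claim that world states are finite or enumerable.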
The above is independent of your choice of modal logic and world states, which could just be temporal (BFO is effectively a temporal theory).
(Maybe there are other ways than this to depend, but I don’t want to get distracted by causation.)
Both definitions of ‘about’ depend on ‘object’ (which I take to be akin to BFO ‘continuant’). I take an object to be a part of the world, so a state of an object is part of a state of the world, and the state space of the world (the space of possible world states) is some kind of product of the object’s state space and the state space of states of everything that’s not that object.
And all this relies on some understanding of the integrity or identity or continuity of an object, such that if you pick out an object in world state w1 and then try to pick out the same object in world state w2, you’ll have some way to decide whether you’ve done so correctly.
I have been reluctant to grant legitimacy to ‘objects’ (or ‘continuants’) – I’ve been wondering whether they are primarily syntactic or logical or social constructs, as opposed to something with some objective clout. Maybe the question here is: if you have two candidate identity criteria for an object that coincide in some world states but not in others, is there some principled way to choose between them? Maybe: the parts of an object are more closely coupled to one another, both spatially and temporally, than they are to parts of things that aren’t that object. This is a bit mushy but seems to have potential.
In this formulation entities (such as mathematical ones) aren’t objects unless they can be said to have variable state. Does the state of the number 7 change through time? Is 7 an object? Hard to say, but I think it would be a pretty unnatural world view that would say it does / is. (But you can have a book about the number pi… hmm… maybe this particular ‘about’ is a term of art.)
There is also the question of what a proposition is, but I don’t see that as hard; a proposition is a 0-ary predicate, which in nonmodal logic is true or false depending on your choice of model / interpretation, and in modal logic is true or false depending on which world state you’re in. I.e. a proposition is a predicate over, or set of, world states, like an ‘event’ in probability theory.
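The identification of a proposition with a set of world states can also be shown concretely. In this toy sketch (the lamp/door names are again illustrative), logical connectives on propositions become ordinary set operations on their extensions, just as with events in probability theory:

```python
from itertools import product

# World states as tuples (lamp state, door state).
WORLDS = set(product(["on", "off"], ["open", "closed"]))

# A proposition is just the set of world states in which it holds.
lamp_on   = {w for w in WORLDS if w[0] == "on"}
door_open = {w for w in WORLDS if w[1] == "open"}

# Connectives become set operations on the extensions.
conj = lamp_on & door_open   # lamp is on AND door is open
neg  = WORLDS - lamp_on      # lamp is NOT on

assert conj == {("on", "open")}
assert neg == {("off", "open"), ("off", "closed")}
```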
How propositions fit into BFO, I’m not sure. In some ways they resemble universals, while in others they resemble particulars (maybe they’re so-called ‘qualities’ of the world).
[Added: I admit the above account only handles particular kinds of propositions. To be complete I ought to provide 'aboutness' accounts of propositions about the past and future, whose truth doesn't vary over time; and of universal and conditional propositions, such as "where there's smoke there's fire".]
Some thoughts on propagators… seemed like an a-ha moment when I wrote it about a month ago, but now that I reread, it seems sort of trivial… oh well, what are blogs for.
The Sussman/Radul propagator framework is described operationally, as a mechanism for manipulating constraint networks and doing constraint propagation, together with dependency-directed backtracking for diagnosing inconsistencies. I want to redescribe it as a mechanism implementing operations that can be described declaratively: to wit, relational algebra.
Relational algebra was invented to provide a clean theory of databases. In the relational model, a database is a set of tables, which are physical and mutable. Each table carries, at any given point in time, a relation, which is ideal and mathematical. When a table is updated, it goes from carrying one relation to carrying another (usually very similar) relation.
Relations in the broader mathematical sense are uniform, potentially infinite sets of tuples. (“Uniform” meaning the tuples all have the same arity, making the relation rectangular, i.e. a subset of a cross product.) In a relational DBMS the relations held by tables can be given by enumerating their tuples. But there is nothing in the theory of relational algebra that requires that relations be enumerable. Relational algebra still makes logical sense with infinite relations, e.g. the less-than relation.
Now the deep piece of relational algebra is relational composition, or join. Join is a binary operation on relations that yields a relation. The key fact about the join operator is that it’s associative. The most interesting and performance-critical part of any DBMS is the implementation of joins – specifically (a) in what order to perform joins when there’s more than one, (b) for each join, the details of performing that particular join (e.g. enumeration over the left table vs. over the right table). The heart of any DBMS is its query planner, which is the often very hairy compiler-like code that makes these strategic decisions.
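Here is a toy sketch of natural join and its associativity, with relations encoded as lists of attribute→value dicts (a common teaching encoding; the family/school data is made up):

```python
# Two rows join when they agree on every shared attribute; the merged
# row carries the union of the attributes.
def join(r, s):
    return [{**a, **b}
            for a in r for b in s
            if all(a[k] == b[k] for k in set(a) & set(b))]

parents = [{"parent": "ann", "child": "bob"},
           {"parent": "bob", "child": "cal"}]
ages    = [{"child": "bob", "age": 7},
           {"child": "cal", "age": 2}]
schools = [{"child": "bob", "school": "elm"}]

# The algebraic fact that makes query planning possible: join is
# associative, so the planner is free to choose the grouping (and,
# with some care, the order) of the joins.
key = lambda rel: sorted(tuple(sorted(row.items())) for row in rel)
assert key(join(join(parents, ages), schools)) == \
       key(join(parents, join(ages, schools)))
```

A real planner exploits exactly this freedom: both groupings denote the same relation, but their costs can differ enormously.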
So you should see the analogy to propagators. At any given time, every propagator network, including individual propagators and cells as special cases, has an associated relation (e.g., the ternary is-the-sum-of relation). (A cell’s associated relation is unary i.e. a set.)
The propagator network as a data structure changes in two ways: the relation that is expressed (i.e. the set of constraints – values in cells and so on) can be updated, and knowledge about the solution of a particular set of constraints (the propagated implications of the expressed constraints) can change.
Connecting propagators to cells is a side-effecty way of constructing new relational join expressions and therefore expressing new relations. The purpose of the propagation loop is to converge toward the relational join expressed by the network.
The readout of the knowledge state of the cells gives an approximation of the true solution to the constraint set. This would be the cross product of all of the sets expressed by each cell’s knowledge state (e.g. an interval), which contains the true solution relation. In the special case of the most successful possible outcome of constraint satisfaction, the knowledge states of the cells give exact single values for each cell, and the cross product approximation is the true relation, which is just a single tuple. This is the sense in which a propagator network can evaluate functions.
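A minimal sketch of this convergence, for a single ternary is-the-sum-of constraint with interval knowledge states (my own toy code, not the Sussman/Radul implementation):

```python
# Each cell's knowledge state is an interval (lo, hi). New information
# is merged by intersection; an empty intersection is a contradiction.
def intersect(x, y):
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    if lo > hi:
        raise ValueError("contradiction: empty interval")
    return (lo, hi)

def sum_propagate(a, b, c):
    """Narrow the cells of the constraint a + b = c to a fixed point."""
    while True:
        a2 = intersect(a, (c[0] - b[1], c[1] - b[0]))
        b2 = intersect(b, (c[0] - a[1], c[1] - a[0]))
        c2 = intersect(c, (a[0] + b[0], a[1] + b[1]))
        if (a2, b2, c2) == (a, b, c):
            return a, b, c
        a, b, c = a2, b2, c2

# a in [0,10], b known to be 3, c known to be 8: a narrows to exactly 5,
# and the cross product of the cells is the single tuple (5, 3, 8).
assert sum_propagate((0, 10), (3, 3), (8, 8)) == ((5, 5), (3, 3), (8, 8))
```

When the constraints are unsatisfiable the intersection comes up empty, which is the "relation known to be empty" case below.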
If a relation becomes known to be empty, we have a contradiction, and the TMS can help us figure out why.
So how does this comparison help?
- By showing connections to familiar frameworks (relations and relational algebra), a presentation like this might help in communicating the propagator idea to people (like me) who like declarative and mathematical formulations of things, and might convince them to join the effort to develop the ideas.
- By thinking about the relations involved one might be helped to think about convergence conditions, case analysis, and other hard problems, and transfer mathematical intuitions developed elsewhere (topology, real analysis, compiler construction, database theory…) to the development of better propagator mechanisms.
Today is (or would be) Preston C. Hammer’s 100th birthday.
I found a great photo of him on the eBay web site a few days ago, but it seems to be gone now. So it goes.
Today would be a good day to rethink how many empty sets there are, to thoughtfully consider how we reuse everyday words in technical settings, and to remember that our technical work takes place in a broader cultural and societal setting.
Many of his writings are on the web; an incomplete list is linked above. Here’s one I liked: http://mumble.net/~jar/articles/hammer-mind-pollution.pdf
I’m amazed at how much good music there is on youtube – not something I would have expected, given that (a) youtube is supposed to be about videos (b) the copyright holders like to milk recordings for $ … I discovered a trove of Conlon Nancarrow (top search hit, they’re all fantastic) a while back, and today have been listening to some Stockhausen (got to the Helicopter String Quartet via suggestions from Death Metal 4:33, which I found thanks to Bill Tozier), and from there the Rite of Spring … I see Carmina Burana is there too, Messiaen, Penderecki, on and on… Here’s Yo-yo Ma playing the Faure Elegie (although I like the piano accompaniment better than orchestra – voila!) … Let’s see, as a test, here’s an obscure piece that, last I checked, had no commercially available recording: Hindemith’s Sonata for Cello Solo, which I used to try to play… Maybe I’ll become interested in listening to music again?
This post is a continuation of the previous one.
We tend to think of some messages as declarative and others as imperative. For example, a message from a sensor tells the recipient what is sensed, so it seems to be declarative, while a message to an effector tells the recipient what it is supposed to do, so it seems to be imperative.
In symmetric situations, like the contrived example (see previously), the same message might seem declarative when seen from the sender’s point of view, but imperative from the receiver’s.
When designing (or verifying) a system one specifies what is supposed to happen when messages are transmitted over a channel, that is, which messages are supposed to be (or may be) sent by the sender, and/or what the receiver is supposed to do on receiving messages. The desired correlation between system parts is decomposed into one correlation between sender state and messages sent, and a second correlation between messages received and receiver state. The specification of one or the other of these is often called an interface specification. An interface specification either says what the receiver of a message is supposed to “do”, as a function of the message and the receiver’s state, or what the sender is supposed to send, as a function of its state.
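The decomposition can be sketched concretely. In this toy example (the sensor/thermostat pair and all its names are made up for illustration), the sender-side spec is a function from sender state to message, the receiver-side spec is a function from message and receiver state to new receiver state, and the end-to-end correlation is their composition:

```python
# Sender-side spec: which message the sensor sends, as a function of
# its state (here, the temperature it senses).
def sensor_message(temperature_c):
    return {"reading": temperature_c}

# Receiver-side spec: what the thermostat does on receiving a message,
# as a function of the message and its own state.
def thermostat_step(msg, state):
    state = dict(state)
    state["heater_on"] = msg["reading"] < state["setpoint"]
    return state

# The desired system-level correlation (heater on iff it's cold) is
# recovered by composing the two interface specs.
state = {"setpoint": 20, "heater_on": False}
state = thermostat_step(sensor_message(15), state)
assert state["heater_on"]  # 15 < 20, so the heater comes on
```

Note that neither spec says anything about the other side: the sensor spec constrains only what is sent, the thermostat spec only what is done on receipt.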
This is all pedantic and maybe obvious, but thinking in this way has helped me understand a puzzle. When I was on the W3C TAG we had occasion to talk about both HTML and RDF (at different times). When RDF came up sometimes someone would speak about what “RDF processors” might do, as if such things were somehow analogous to “HTML processors”. This always struck me as very odd and somehow troublesome, but I could never articulate why. I think the reason the phrase “RDF processor” is jarring is clear now. HTML is an imperative language; its specification tells browsers (or processors) what they should do to correctly interpret an HTML message (document). The generator of the message is relatively unconstrained by the specification – it can send any message it likes, as far as HTML is concerned. RDF and its vocabularies are declarative; their specifications tell curators (or generators) what they should do to generate correct RDF. The consumer of the message is relatively unconstrained by the specification. So speaking of “RDF processors” in a discussion of web standards is as peculiar as speaking of “HTML generators”. Standards have almost nothing to say about what either kind of thing is supposed to do. Their correctness is not articulated by any standard, but rather is idiosyncratic to the particular system in which they’re embedded.
This leads to an odd but I think illuminating way of looking at RDF. What does it mean for something to conform to, say, the Dublin Core vocabulary? This is another puzzle that has been bothering me. RDF being declarative, the specification would apply to generators of DC-containing RDF documents, not to consumers. The Dublin Core spec says (in effect) that you shouldn’t write ‘X dc:creator Y.’ unless what ‘X’ means is the creator of what ‘Y’ means, and so on. Because this condition is not usually decidable by machine, the “should” (conformance condition) applies not to a piece of software, but to an overall curation process, a system with human parts and mechanical parts, that leads to the generation of such a statement.
- There may be further installments, but I make no promises.