Diagrams/Dev/Constraints
Design of diagrams-constraints
An effort to show that my GSoC project was successful.
What other libraries did you consider? Why sbv?
- See mailing list discussion: https://groups.google.com/forum/#!topic/diagrams-discuss/DPaSbHqp4N4
- After that, I had more discussion with cmears: http://ircbrowse.net/browse/diagrams?events_page=1386
- Summary: Many solvers were considered, sourced from googling "constraint solving library" and from going through Hackage. Most of the "solvers" were limited to linear constraints. Two were not: SMT solvers, and Gecode. The Haskell bindings for Gecode (monadiccp-gecode) are basically unmaintained, as the listed maintainer told me. SMT solvers are being actively researched by various groups (Microsoft, SRI International, etc.) and show no signs of going away. Sbv is a well-maintained binding that abstracts over the SMT solvers (via SMTLIB, a well-defined format). Z3 in particular has been developing nonlinear solving techniques, and there was a research project that combined Z3 and Mathematica to solve extremely complex systems; Z3 also supports the SMTLIB format. My conclusion was that sbv is going to be around for a while and will (through the efforts of the SMT solver developers) gain many useful constraint-solving algorithms, so it was a good library to use.
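- For concreteness, here is a minimal sketch of what using sbv looks like (written against a recent sbv API; this is illustrative only, not code from diagrams-constraints): the same symbolic program can be handed to any SMTLIB-speaking solver, here Z3 via satWith, and nonlinear constraints over the reals are expressible directly.

```haskell
import Data.SBV

-- Minimal sketch (recent sbv API; illustrative only): a quadratic constraint
-- handed to Z3 through sbv's generic SMTLIB interface.
tiny :: IO SatResult
tiny = satWith z3 $ do
  x <- sReal "x"
  y <- sReal "y"
  constrain $ x * x + y * y .== 25   -- nonlinear constraint over the reals
  constrain $ x .> 0
  pure sTrue
```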
Why a type family in diagrams-lib rather than a polymorphic V2 type?
- The person you would really have to ask is byorgey. He wrote the V type family: https://github.com/diagrams/diagrams-core/blob/75e7f64caa7d0bd6e384bd5316b6e5e6754b1f1d/src/Graphics/Rendering/Diagrams/V.hs. He also used the Scalar type from vector-space: https://github.com/diagrams/diagrams-core/blob/75e7f64caa7d0bd6e384bd5316b6e5e6754b1f1d/src/Graphics/Rendering/Diagrams/Points.hs#L59. All I did was propagate that idiom throughout diagrams-lib. Indeed, I recommended not using Scalar in https://github.com/diagrams/diagrams-core/pull/65, and that suggestion has been looked at with suspicion. So the type families are here to stay.
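- To illustrate the idiom, here is a simplified sketch (not the actual diagrams-core source; the Point type is a stand-in): every diagrams type reports its underlying vector space through the open type family V, and scalars are then obtained from that space.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Simplified sketch of the diagrams-core idiom:
-- the V type family maps each diagrams type to its vector space.
type family V a :: *

newtype Point v = Point v

type instance V (Point v) = v   -- a point lives in the space of its vector type
type instance V [a]       = V a -- containers inherit the space of their elements
```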
- The question that could be asked, however, is:
Why not simply parameterize over the scalar value and fix our two-dimensional vector type to be V2?
- The existing datatype was packed, and I thought that should be preserved (and indeed it is, in Diagrams.TwoD.Types.Double). That was what I put in my proposal (or rather its comments), and so that was what I implemented. It wasn't really any extra effort, since there was already a type family (V) to use. The diagrams types are already highly polymorphic, so adding more polymorphism shouldn't hurt (much).
- Later, someone told me on IRC that the packed datatype wasn't that useful. Although the type family does let R2 be packed, it doesn't let us pack the other data types, such as SizeSpec2D. Looking at libraries such as vector and text, the general approach for packing types is to create a data family of packed types, write marshalling/unmarshalling functions, and then optimise them away (a process known as "fusion"). If we go the fusion route, then the packed R2 types are unnecessary (the data family takes care of them), and so all we need is V2. Terminology note: since e.g. String -> ByteString is called 'pack', "packed" datatypes here are those that use unboxed types. This might appear to conflict with current GHC terminology, which uses the UNPACK pragma to signify unboxing; however, the opposite of UNPACK is NOUNPACK, not PACK, so the terms are technically not in conflict. I suggest removing the "UN" part so that the pragmas are PACK and NOPACK respectively.
- Suppose we did swap the v parameters for s parameters and change the usages of v to usages of (V2 s). The V type family will remain in use for other reasons, such as abstracting over 2D vs 3D, so now the library has different APIs for 2D, 3D, and generic code. But it already did (generic vs. R2Ish vs. R3Ish). The (R2Ish v) constraints change to (ScalarIsh (V s)) constraints. But writing those over and over again is tedious, so we define R2Ish again. Except this time, since V2 / V3 are already in the types, we can actually just define RIsh (no dimension). So all that actually needs to change are the data types and the name of the constraint synonym.
- So now I'm thinking it's possible and useful to just always use V2. In particular, I can't write the generic Renderable instance unless all types have the same scalar type parameter as their last type argument, which using V2 uniformly makes easy but using V makes hard.
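- A rough sketch of what that design looks like (the names RIsh and ScalarIsh are just the ones used in this discussion, not existing diagrams code): fix the two-dimensional vector type and parameterize everything over the scalar instead.

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- Hypothetical sketch: fix the vector type to V2 and parameterize over the scalar s.
data V2 s = V2 !s !s
  deriving Show

-- One dimension-free constraint synonym replaces the repeated R2Ish constraints.
type ScalarIsh s = (Floating s, Ord s)
type RIsh s      = ScalarIsh s

-- Library functions then mention the scalar directly:
translateX :: RIsh s => s -> V2 s -> V2 s
translateX dx (V2 x y) = V2 (x + dx) y
```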
What is R2Ish other than R2?
- I have Float and Generic (V2) modules in my branch. R2 is Double-specific, Float.R2 is Float-specific, and SDoubles are ADTs (boxed types) and thus don't pack, so the generic V2 is sufficient.
- It also turns out that the R2Basis :>: a type (defined for HasTrie R2Basis) is R2Ish, although I didn't write the instances. In fact, we could just define type V2 a = R2Basis :>: a. I'm not certain this gets us anything besides some accessors (untrie f XB and untrie f YB for accessing x/y components resp. of V2). We already have Coordinates and HasBasis though, so IMO not much point in it.
Why does V2 then pop up in diagrams-constraints?
- V2 shows up in diagrams-constraints because it uses boxed types. Similarly, R2 shows up in diagrams-svg because it uses Doubles. Etc.
Why structure the constraint-solving as a Diagrams Backend?
- renderDia requires a diagrams Backend to work. I wanted the API to be as high-level as possible, i.e. to use renderDia, so that meant writing a backend.
Flexibility in supporting only some Prims?
- Yeah, that's another reason. But I should be able to just write a generic (Renderable (f Double) b) => Renderable (f CDouble) (Constraint b) instance, after a fair amount of type hackery. In the meantime I can keep my current "Circle" and "CPrim" types. Circle is quite hacky and will go away once the generic instance exists, but CPrim is a necessary part of the constraint-solving API and so should stay. The easiest way to quantify over "CPrim and everything renderable by the parent backend" is to use a typeclass and the machinery that diagrams-{core,lib} supplies.
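- A self-contained sketch of that instance's shape (the Renderable class and the Constraint/CDouble types here are simplified stand-ins for the real diagrams-core and diagrams-constraints machinery, and reading back solved values is left trivial):

```haskell
{-# LANGUAGE FlexibleContexts, FlexibleInstances, MultiParamTypeClasses #-}

-- Simplified stand-ins for the real diagrams-core machinery.
class Renderable prim b where
  render :: b -> prim -> String

newtype Constraint b = Constraint b       -- the constraint-solving wrapper backend
newtype CDouble      = CDouble Double     -- stand-in for a solver variable/value

-- Once solved CDouble values can be turned back into Doubles, any primitive the
-- parent backend renders over Double is renderable over CDouble as well.
instance (Functor f, Renderable (f Double) b) => Renderable (f CDouble) (Constraint b) where
  render (Constraint b) prim = render b (fmap fromCDouble prim)
    where fromCDouble (CDouble d) = d     -- in reality: read the solver's model
```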
What do the various types in Constraint.hs represent?
- CBool, CPrim, Circle, etc. -- see the Haddocks.
What are some next steps to make diagrams-constraints useful? (ideally a bunch of small, specific tasks)
The main thing is rewriting diagrams-core to be friendlier to constraints:
- byorgey has the fixed-point branch
- new NamedPrim constructor so primitives can use/output names - could be useful for e.g. <a name="..."> in SVG as well as for constraint stuff
- adjustments to Attributes: they should use lenses rather than Data, and just be plain Scalar v's instead of Measures. Measures can be handled by replacing the number data type (similar to what I do in diagrams-constraints with CDouble): basically, you stack backends instead of hacking in special cases like Measure. Of course, the constraint solver copes just fine with things like "the diagram's width is the sum of the widths of these circles, and each circle's width is 1/x of the diagram's width", while Measure probably does not, but I don't think it would be any worse than what we have now.
- Handle traces (which currently output a SortedList, requiring Ord) better. Haskell lists don't translate to sbv nicely, so maybe add a new datatype TraceResult with firstIntercept/lastIntercept methods (removing SortedList); a rough sketch follows after this list. I'd have to check the usage, but I think we can get away with just those two in most of diagrams-lib...
- After that, write the generic Renderable instance as above and the constraints backend will be very low-maintenance (updating only with sbv or with diagrams-core).
- Some further tasks would be "nice" primitives for simple objects such as rectangles, which are much easier for constraint solvers to deal with than general paths. In particular, the current diagrams-lib API means that all circles are rendered first to Paths, losing information (and also adding an Ord constraint, since the discretization process is numerical). Ideally, the backends would do the discretization and Arc would be a rendering primitive rather than a function that returns a path (arcT). Or we could add conic sections as Segments and just generalize our notion of "Path".
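A hypothetical shape for such an arc primitive (illustrative only, not part of diagrams-lib; the type parameter d stands in for the scalar so a constraint backend could keep it symbolic):

```haskell
-- Hypothetical arc primitive: backends that want paths discretize it
-- themselves; a constraint backend keeps the fields symbolic.
data ArcPrim d = ArcPrim
  { arcCenterX :: d   -- centre of the underlying circle, x coordinate
  , arcCenterY :: d   -- centre of the underlying circle, y coordinate
  , arcRadius  :: d   -- radius of the underlying circle
  , arcStart   :: d   -- start angle, in radians
  , arcSweep   :: d   -- sweep angle, in radians
  }
```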
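Similarly, for the trace-handling item earlier in this list, one possible shape for the replacement type (purely a sketch; these names do not exist in any current code):

```haskell
-- Hypothetical TraceResult that avoids returning a full SortedList.
-- d would be Double for ordinary backends and a symbolic value for constraints.
data TraceResult d = TraceResult
  { firstIntercept :: Maybe d   -- nearest intersection along the trace ray, if any
  , lastIntercept  :: Maybe d   -- farthest intersection along the trace ray, if any
  }
```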
What sorts of constraints are easy to support?
- linear? - supported in Z3; I implemented a few (alignAxis, spacingX/Y). See the sketch at the end of this section.
- linear inequalities? - they should be supported as well, but I haven't implemented anything that uses them.
- square / sqrt / distance? - also supported (less well), again I haven't done anything.
- sin / cos / angle? -- not supported in solvers ATM (although constant computations like sin pi work, thanks to sbv and my patches to it)
- constraints on Transformations? - I think you can write transformations with variables and then constrain the variables, so it should work already (although I haven't actually tried it...).
- constraints on Measured Attributes? - Measured Attributes currently have a Data instance requirement, but CDouble can't provide one, since it's a wrapper around IO and all the functions in Data are pure. CDoubles are actually more powerful than Measure (min/max/etc. are all first-class), so I think Measure and the Data constraint should just disappear (but that requires more diagrams-core changes).
- constraints on offsets / vectors in Paths? - I don't actually render Paths yet, due to the aforementioned Attribute problem.
- Which of the above are probably hard?
- Doing transcendental stuff like sin/cos constraints is hard, because (a) the mathematical solving is tricky and (b) you'd have to standardize it in SMTLIB somehow. But once it's implemented in your SMT solver of choice (Z3, Yices, etc.), it's relatively easy to add to sbv, and then diagrams-constraints should just work (maybe after a few tweaks to the CDouble instances). Everything else is just coding. The diagrams-core rewrite is the hardest part, and adding new primitives is the easiest.
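- To make the "easy" cases concrete, here is a hedged sketch written directly in sbv (not through the diagrams-constraints API): an alignAxis/spacing-style linear system, plus the self-referential width relation mentioned earlier, which is nonlinear because the fraction n is itself an unknown.

```haskell
import Data.SBV

-- Hedged sketch (plain sbv): linear alignment/spacing constraints plus a
-- nonlinear, self-referential width relation.
layout :: IO SatResult
layout = sat $ do
  x1 <- sReal "x1"; y1 <- sReal "y1"     -- centre of circle 1
  x2 <- sReal "x2"; y2 <- sReal "y2"     -- centre of circle 2
  constrain $ y1 .== y2                  -- alignAxis-style: same height
  constrain $ x2 - x1 .== 3              -- spacingX-style: 3 units apart

  total <- sReal "diagramWidth"
  n     <- sReal "n"
  let circleWidth = total / n            -- each circle is 1/n of the diagram width
  constrain $ total .== 2 * circleWidth  -- ...and the diagram is just the two circles
  constrain $ total .> 0
  pure sTrue
```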