Hints and Discussion for:

Homework 3: Interactors

Due: Thursday, October 25 **NOTE: EXTENDED DEADLINE**

05-631: Software Architecture for User Interfaces, Fall 2001

The InteractiveWindowGroup will have handlers for all the input events that come from Java.

Let's make the simplifying assumption that there can only be one interactor running at a time. This means that when an interactor is already running, the InteractiveWindowGroup can check each event to see if it is the stop event for that running interactor; if so, it calls stop() on the interactor, and otherwise it calls running() on the interactor. If it calls stop(), it should then use getState() to check whether the interactor is no longer running (state == IDLE).

When no interactor is running, InteractiveWindowGroup will need to go through all the interactors for each input event to see if any interactor wants to start. The InteractiveWindowGroup should do the event check for each interactor (using the interactor's getStartEvent method and comparing to the event from Java). If the event matches, the InteractiveWindowGroup should let the interactor itself do the hit testing, since, for example, a ChoiceInteractor should only start if the start event happens over a graphical object that implements the Selectable interface. Therefore, the interactor's start() method should be called if the event matches, and start() should first check the position to see if the interactor should actually start. InteractiveWindowGroup will then need to check, after calling start(), whether the interactor started or not, using getState() on the interactor.
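The dispatch logic above can be sketched as follows. This is only a sketch: the State values, the Event class with its matches() method, and the exact method signatures are assumptions standing in for the homework's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

public class DispatchSketch {
    // Minimal stand-ins for the homework's classes; the names, the
    // State values, and Event.matches() are assumptions, not the spec.
    enum State { IDLE, RUNNING }

    static class Event {
        final String id;
        Event(String id) { this.id = id; }
        boolean matches(Event other) { return id.equals(other.id); }
    }

    static class Interactor {
        Event startEvent, stopEvent;
        State state = State.IDLE;
        Event getStartEvent() { return startEvent; }
        Event getStopEvent() { return stopEvent; }
        State getState() { return state; }
        // A real start() would hit-test the position first; this stub
        // always starts so the dispatch flow can be demonstrated.
        void start(Event e) { state = State.RUNNING; }
        void running(Event e) { /* update interim feedback */ }
        void stop(Event e) { state = State.IDLE; }
    }

    List<Interactor> interactors = new ArrayList<>();
    Interactor current;  // the single running interactor, or null

    void dispatch(Event e) {
        if (current != null) {
            if (e.matches(current.getStopEvent())) {
                current.stop(e);
                // stop() may refuse; only clear if it really went idle
                if (current.getState() == State.IDLE) current = null;
            } else {
                current.running(e);
            }
        } else {
            for (Interactor i : interactors) {
                if (e.matches(i.getStartEvent())) {
                    i.start(e);  // the interactor checks the position itself
                    if (i.getState() != State.IDLE) { current = i; break; }
                }
            }
        }
    }

    public static void main(String[] args) {
        DispatchSketch w = new DispatchSketch();
        Interactor inter = new Interactor();
        inter.startEvent = new Event("left-down");
        inter.stopEvent = new Event("left-up");
        w.interactors.add(inter);
        w.dispatch(new Event("left-down"));   // matches start event
        System.out.println(w.current == inter);
        w.dispatch(new Event("move"));        // routed to running()
        w.dispatch(new Event("left-up"));     // stop event
        System.out.println(w.current == null);
    }
}
```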

When testing for starting, interactors should check to see if the event is inside any of the children of their attached group (note: not the group itself). Thus, all interactors operate over the immediate children of the group. If you need an interactor to operate on a single object, you would have to add it to an extra level of group.

Note also that you have to hit-test the children returned by getChildren() in reverse order (front-most to back-most). Fortunately that's easy to do with List.listIterator().
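A minimal illustration of front-to-back traversal with List.listIterator(), here over plain strings standing in for graphical objects:

```java
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ReverseHitOrder {
    public static void main(String[] args) {
        // Children are stored back-most first (drawing order), so
        // hit testing must walk them in reverse: front-most first.
        List<String> children = Arrays.asList("back", "middle", "front");
        ListIterator<String> it = children.listIterator(children.size());
        while (it.hasPrevious()) {
            System.out.println(it.previous());
        }
    }
}
```

Passing the list's size to listIterator() positions the iterator past the last element, so previous() starts with the front-most child.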

All Events include mouse pointer information, even keystrokes. That's a difference from Java -- KeyEvents in Java don't include the current mouse position. So InteractiveWindowGroup has to store the mouse position in order to add it to keystroke events.
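One way to sketch this is a small cache that remembers the last mouse position; the class and method names here are hypothetical, not part of the assignment's interfaces:

```java
public class PointerCache {
    private int lastX, lastY;

    // Call this from the mouse-motion handler so the cached position
    // is always up to date.
    public void mouseMoved(int x, int y) { lastX = x; lastY = y; }

    // When building a keystroke Event, stamp in the cached position
    // (returned here as {x, y} since the homework's Event class is
    // not available in this sketch).
    public int[] positionForKeyEvent() { return new int[] { lastX, lastY }; }

    public static void main(String[] args) {
        PointerCache cache = new PointerCache();
        cache.mouseMoved(42, 17);          // mouse moved, then a key is typed
        int[] p = cache.positionForKeyEvent();
        System.out.println(p[0] + "," + p[1]);
    }
}
```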


> I've written up part of the code for MoveInteractor and it works but
> i'm facing the following problem when using InteractiveWindowGroup. I
> have the following test code

> InteractiveWindowGroup w = new
> InteractiveWindowGroup("Test",400,400);
> Interactor inter = new MoveInteractor();
> SimpleGroup simple = new SimpleGroup(0, 0, 200, 200);
> simple.addChild(new Rect(0,0,40,40,Color.red,1));
> simple.addChild(new FilledRect(50,10,40,40,Color.blue));
> simple.addChild(new Rect(10,90,60,40,Color.magenta,1));

> inter.setGroup(simple);
> inter.setStartAnywhere(false);
> inter.setStartEvent(new Event(Event.MOUSE_DOWN,
> InputEvent.BUTTON1_MASK, 0, 0, 0));
> inter.setStopEvent(new Event(Event.MOUSE_UP, InputEvent.BUTTON1_MASK,
> 0, 0, 0));
> w.addInteractor(inter);
> w.addChild(simple);

> First of all, is this the right approach? Should I be setting the start
> and stop events here or should MoveInteractor set its own start and
> stop events automatically in the constructor?

This seems fine to me. Yes, the user of an Interactor should set its events, just as your code does. You should have your interactors set reasonable default events in their constructors, but our tests won't count on it. We'll use setStartEvent and setStopEvent to configure them, just like you do.

> Secondly, in WindowGroup, simple gets drawn on "private JComponent
> canvas" . Now in InteractiveWindowGroup, when I do register for the
> events, I get the x, y location wrt the entire frame, and not the
> canvas. There doesn't seem to be any way for me to translate those coordinates
> to coordinates wrt the canvas. Am I missing something?

This shouldn't be a problem, because the canvas is always at 0,0 relative to the frame.


> The canvas is actually being placed under the title bar and like we
> discovered in the first assignment, that position would be something
> like 4,23 and not 0,0. If we can somehow get the position of the
> canvas, then we can translate accordingly in InteractiveWindowGroup.

The portable way to find out how much the canvas is translated relative to the whole frame is to call getInsets() on your InteractiveWindowGroup. The left and top coordinates of the insets are the x,y position of the upper left corner of the window client area, which is always the upper left corner of the canvas. On Windows, you'll get 4,23. On Linux, you'll get 0,0.
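A small sketch of the translation, using java.awt.Insets directly (the insets values are just the Windows example above):

```java
import java.awt.Insets;

public class InsetTranslate {
    // Translate a frame-relative point into canvas coordinates by
    // subtracting the frame's left and top insets.
    static int[] toCanvas(int frameX, int frameY, Insets in) {
        return new int[] { frameX - in.left, frameY - in.top };
    }

    public static void main(String[] args) {
        // Insets(top, left, bottom, right): the Windows values above.
        Insets win = new Insets(23, 4, 4, 4);
        int[] p = toCanvas(104, 123, win);
        System.out.println(p[0] + "," + p[1]);
    }
}
```

In the real program you would get the Insets object by calling getInsets() on the InteractiveWindowGroup rather than constructing one by hand.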


> Is it alright if we make NewInteractor an abstract class?
> You don't have that in the code in the assignment, but I can't make it
> compile without that.

Yes, make it an abstract class. That was just an oversight.


> For the Group classes, should contains() return true only if contains()
> returns true for one of its children? I think the answer to this
> question should be yes; that way Group.contains() can be used in the
> Interactor.start() method to determine if the event is considered "inside"
> or "outside". I wanted to double check with you first.

I think this could go either way. In fact, Amulet (and some other toolkits) provide a setting on groups to determine what behavior you get! One possibility is that Group.contains() should return true if and only if the point is inside the group's bounding box. This is clearly the easiest to implement. If the caller cares about whether the point is inside a child, it's up to the caller to enumerate them and call contains() on each one. Otherwise every call to contains() has to drill down through the entire group hierarchy. On the other hand, groups are supposed to be transparent, so it seems like it should be possible to click through a transparent part of a group and select the object underneath it. So it could go either way.


> What happens when an Interactor is assigned to a Group which is inside of
> another Group? How can we do hit testing? The input that the Interactor
> receives is the mouse position relative to the canvas or frame. However,
> the Group to which the Interactor is assigned only knows about its
> own local coordinates. Are we supposed to manually retrieve and sum each
> of the Group's parent and ancestors' offsets so that we can compare this to
> the mouse position as given by the Java MouseListeners?

Yes, you are. Every GUI toolkit has to do the same thing. Aside from the ScaledGroup problem (see below), all the information you need to do the coordinate transform is available.
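A minimal sketch of that coordinate transform, using a stand-in Group class that only records its position within its parent (the real Group interface has more than this):

```java
import java.awt.Point;

public class CoordTransform {
    // Stand-in for the homework's Group: all it records here is its
    // position within its parent's coordinate system.
    static class Group {
        int x, y;
        Group parent;
        Group(int x, int y, Group parent) { this.x = x; this.y = y; this.parent = parent; }
    }

    // Map a window-relative point into a group's local coordinates by
    // subtracting the offset of the group and every ancestor.
    static Point windowToLocal(Group g, Point windowPt) {
        Point p = new Point(windowPt);
        for (Group a = g; a != null; a = a.parent) {
            p.translate(-a.x, -a.y);
        }
        return p;
    }

    public static void main(String[] args) {
        Group outer = new Group(10, 10, null);
        Group inner = new Group(5, 20, outer);
        Point local = windowToLocal(inner, new Point(100, 100));
        System.out.println(local.x + "," + local.y);
    }
}
```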


> Do we need to have ability to select Layout/Scaled/Simple Group in drawing
> editor? I'm assuming we don't and the objects get drawn as if they are
> part of a SimpleGroup. Is this correct?

Correct. You might use a layout group for the controls, such as a palette for choosing the drawing mode or the drawing color. But the main drawing area should be a SimpleGroup where the user gets to place the objects.


ScaledGroup:

> I'm having some problems with getting the interactors to work with 
> Scaled/LayoutGroup. The problem happens with hit-testing. Somehow, 
> somewhere I'd need to scale the coordinates that I get from the Event 
> to match up to the scaled coordinates. The ideal place to do this would 
> be ScaledGroup and what I really need in that case is a 'public 
> GraphicalObject contains(int x, int y)'. Because, to do this correctly 
> in an Interactor, I'd have to put conditions like

> if(group instanceof ScaledGroup)
> {
> scale coordinates
> call child.contains(scaledX, scaledY);
> }

> which doesn't seem quite right to me.

No, you are right, that isn't a good thing to have to do.


> How is MoveInteractor supposed to move objects within a ScaledGroup? The
> only way that MoveInteractor can move objects (using the provided
> interfaces) is to use moveTo().

> I can't see how this can work with objects in a ScaledGroup. Feel
> free to tell me where my reasoning/assumptions/etc. are wrong.

> To my understanding, moveTo() moves objects by changing the internal x and
> y of the object so that it would be drawn at the given point in the
> context of the group which contains it. This means that if I had a
> SimpleGroup at 10,10 and I moved a Rect to 10,10, it would be drawn at
> 10,10. Doing the same move in a ScaledGroup (with scales of 0.5 and
> 0.5) would result in the Rect being drawn at 15,15.

> If this is an incorrect interpretation of moveTo(), (which is possible),
> here is my reasoning for that implementation:
> It is impossible using the interfaces to determine how to position
> something in the "master" coordinate system. If I wanted to move a Rect
> to 15,15 in the master coordinate system in the ScaledGroup specified
> above, what would I do? I *can* determine the x and y of the parent using
> getBoundingBox() and recursing through all the parents. However, there is
> no way to know the scaling factor. Thus, scaling makes any other
> moveTo() interpretation unreasonable.

> If somehow I was wrong and moveTo() was supposed to move objects based on
> the "master" coordinate system, then we can't do moveInteractor because
> moveTo() can't be made to work with the ScaledGroup.

> If I was correct about moveTo(), then ScaledGroup still won't work with
> this Interactor because the interactor has no way of telling whether its 
> children are scaled groups, thus can't know whether it should attempt to 
> find the scale and adjust the movement magnitude. Basically, if the mouse
> moved 50 pixels to the right, I can have my interactor blindly move the
> object 50 to the right, but if the scale is say 0.5, it would only move 25
> pixels, which is wrong.

The simplest solution is that we don't care how ScaledGroup and MoveInteractor behave together. Behavior unspecified. :-) Except that it shouldn't crash and burn, I suppose. One possibility is that objects move slower (not follow the mouse) if they are in a scaled group.

In a "real" system like Amulet, there are two important additions: The coordinates (like the x and y passed to hit testing) come with a coordinate system they are relative to (e.g., the window or whatever), so each test can tell what the x and y mean. The other important addition, is a TranslateCoordinates method, which is a method implemented in each group, so that coordinates can be interpreted anywhere.

Since we didn't have these in our specification for this assignment, we will leave the behavior of ScaledGroup to whatever you can make work.


> Another concern that I have is the code that we are building on.
> The code that we are writing is partially based on code for the last
> assignment, which we have not yet gotten back. Thus, even if it did pass
> our testing, our implementations may still have some bugs that would
> dramatically affect the success of the current assignment. This is pretty
> unfair, in my eyes, since it introduces the risk of completely
> jeopardizing a student's success in this class solely because there was
> something wrong in the second assignment that they missed. Would it be at
> all possible to post a solution at assignment 2 so that we can at least
> verify our implementations?

This is a good point.

Trade offs:

Postponing HW 3:

Please vote on these questions, and/or we can discuss it further in class tomorrow.

--> The final decision was to extend the deadline by one week.


There is a reference implementation for Homework 2 that you might want to use. It is based on Samuel Spiro's implementation.

The zip file includes TestHomework2.java which I used to test some features of all of the implementations. Note that there may be some features that are still buggy in this implementation. If you find a problem with this implementation, please let me know and I will try to fix them for everybody.


> How can we make it so that Interactors that are hidden behind other opaque
> objects (like a plain Rect) are not activated when someone clicks on them
> (assuming that a click on the Interactor starts it)?

> The only way that I can see for such a check is for the
> InteractiveWindowGroup to get the Interactor's group object and check to
> see if the clicked point is not "caught" by some object above it. Such a
> check would be pretty slow (and also a real chore to implement which is
> why I am asking about it).

This has been a problem with many such models. In Amulet, we have a priority on Interactors as a partial solution, so you can make sure that the Interactor on the "frontmost" objects runs first. 

We didn't completely specify the behavior of obscured objects. For this assignment, hopefully, you can put a single interactor on a GROUP of objects, so you won't need different interactors at the same time. Also, you can disable and re-enable interactors based on global modes, by removing and re-attaching them. Alternatively, you can have different behaviors be on different start events (e.g., move on right down, select on left down, draw on shift-left-down) so this problem might not come up.

Another solution is to change the way you start interactors: instead of scanning through a list of interactors in some arbitrary order, you drill down through the object hierarchy for objects that contain the mouse cursor. You scan children in front-to-back order, to give obscuring objects a chance at the event first. As soon as you reach a group with an interactor that matches the start event, you start it and stop looking for other interactors.

Start-anywhere interactors would have to be scanned separately, of course.


> For the Group classes, should contains() return true only if contains()
> returns true for one of its children? I think the answer to this
> question should be yes; that way Group.contains() can be used in the
> Interactor.start() method to determine if the event is considered "inside"
> or "outside". I wanted to double check with you first.

We answered this just recently: either way is OK. Probably it is easiest just to do a bounding-box check for a group.


> For the DrawingEditor, when you say "moving graphical objects around", do 
> you mean only move one at a time around, or do you mean move the selected 
> objects around all at once?

While the ChoiceInteractor itself must support multiple selections, it is OK if the user interface for your drawing editor only supports single selection (selecting only one object at a time). That is, the type parameter passed to the ChoiceInteractor you use for selecting graphical objects in your drawing editor can be SINGLE.


> Could you give an example of when startAnywhere would be true? Most of the 
> examples I've thought of set startAnywhere to false.

I agree these aren't that useful. Even the NewInteractor would want you to click in the background group. In Amulet, we had the debugging interactor that popped up the inspector, and an interactor that supported multiple users, that was always on to show the other person's actions. 


> Is there supposed to be any visual difference on the Selectable 
> GraphicalObject when it is interimSelected vs. selected?

Normally there wouldn't be. In "real" selection handles, as we discussed in class, the behavior is not continuous, so there wouldn't be an interimSelected anyway. However, for this assignment, it would be nice to have the Selectable GraphicalObject show interim feedback that looks different.


> I assume that for the NewRectInteractor class, you want the rectangle to 
> flip if the mouse moves to the left or up while in the running state. Is 
> that correct?

Sure, good idea.


> Could you please clarify how the ChoiceInteractor is supposed to behave?
> What do you mean when you say that the ChoiceInteractor should update the
> interim selection as the mouse moves around? Are you saying that the
> Interactor should process any object that it passes while rolling over?

> Intuitively (my intuition of course), I would think that the Interactor
> would never need to be in the running state, only adding selections at the
> start event and immediately going idle after that action.

No, the idea is that the ChoiceInteractor only shows the interim feedback after the start event happens, while you are in the running state. Thus, you would press down (for example) and then get feedback as you moved around, until the mouse release. Note that this is NOT like Windows 2000 menu bars, which highlight as you move over the items even before you press a mouse button.


> Can we assume that only SelectionHandles objects will be attached to the
> ChoiceInteractor, or do we need to provide a method for ChoiceInteractor
> to encapsulate incoming groups into SelectionHandles objects so that there
> can be selection feedback?

ChoiceInteractors should work with any class that implements the Selectable interface. This includes SelectionHandles but maybe also other objects. For example, you might want to create your own buttons or menus using a ChoiceInteractor (e.g. for selecting the color of objects in the drawing editor), and your own implementation will have a different object implement the Selectable interface in a way that is appropriate to menus or buttons rather than for selecting graphical objects in an editor.


> On a related note, how should LayoutGroup handle Interactors, specifically
> the NewInteractor? Can we treat this like ScaledGroup, and as long as it
> doesn't crash, it's okay? Or do you have a particular behavior which you
> are looking for?

LayoutGroup should definitely handle ChoiceInteractor, but it is OK if you don't bother with MoveInteractor and NewInteractor for LayoutGroups. So, yes, you can treat it like ScaledGroup.


> I can't think of any sensible behavior when either ChoiceInteractor, 
> MoveInteractor, or NewInteractor has startAnywhere set to true, and when 
> the Interactor starts outside. In class, I believe you said that when 
> NewInteractor's or MoveInteractor's state is RUNNING_OUTSIDE, it should 
> abort. In the case when these Interactors start out by RUNNING_OUTSIDE, it 
> would immediately abort anyway. Furthermore, it doesn't make much sense 
> for ChoiceInteractor to start in the RUNNING_OUTSIDE position. How would 
> you know which Object to turn on?

This is definitely NOT what I said. I said that when the STOP EVENT happens while outside, the interactor should abort. When the interactor is outside, it isn't showing the highlighting, but it is still running. For a ChoiceInteractor with firstOnly = false, starting over no object makes sense: it will just start highlighting whatever object the mouse passes over. Similarly, for a NewInteractor, you could start creating an object while outside of the group. A MoveInteractor that operates on a group of objects doesn't make too much sense, since you wouldn't know which object to move.

> Therefore, is it okay if we disallow the designer to set startAnywhere to 
> true for these Interactors?

No, then there wouldn't have been any point in having it. However, startAnywhere will be a small part of the grade, so it would be appropriate to leave it for last.


Many people noticed that we did not have the appropriate methods for Homework 3 for you to correctly handle scaled groups. In particular, there was no way to translate the coordinates from one group to another. To make this possible, we added two new methods to the definition of groups:

public Point parentToChild (Point pt);
public Point childToParent (Point pt);

and we supplied implementations of these in the reference implementation for Homework 2.

If you want to make the interactors work correctly for scaled groups, you can use these methods. Since this is a pretty late change to the assignment, you can consider this to be an extra credit part of the assignment.

Here is the definition of these methods, which is also now in Homework_3.html:

parentToChild() and childToParent() translate between the group's coordinate system and its parent's coordinate system. parentToChild() takes a point in the parent coordinate system and maps it down to the group's coordinate system. For example, if the group is located at (5,10) in its parent, then parentToChild(Point(5,10)) should return Point(0,0). Similarly, childToParent() maps a point in the group's coordinate system up to the parent coordinate system. Most groups will implement these methods as simple translations, but ScaledGroups should take scaling into account as well. (Note: since Point represents integer coordinates, you'll lose some precision if you put a ScaledGroup inside another ScaledGroup. If accurate scaling were important to our interactors, we'd want to use floating-point coordinates.) Example implementations of parentToChild() and childToParent() can be found in the reference implementation for Homework 2.
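A hedged sketch of how a ScaledGroup might implement the two methods. The field names and constructor here are assumptions; only the two method signatures come from the assignment. Note the integer truncation the assignment warns about.

```java
import java.awt.Point;

public class ScaledCoords {
    // Hypothetical ScaledGroup: located at (x,y) in its parent and
    // scaling its children by (scaleX, scaleY).  Only the two method
    // signatures come from the assignment.
    static class ScaledGroup {
        int x, y;
        double scaleX, scaleY;
        ScaledGroup(int x, int y, double sx, double sy) {
            this.x = x; this.y = y; this.scaleX = sx; this.scaleY = sy;
        }
        // Parent coordinates -> this group's coordinates:
        // translate first, then undo the scaling.
        Point parentToChild(Point pt) {
            return new Point((int) ((pt.x - x) / scaleX),
                             (int) ((pt.y - y) / scaleY));
        }
        // Group coordinates -> parent coordinates:
        // apply the scaling, then translate.
        Point childToParent(Point pt) {
            return new Point((int) (pt.x * scaleX) + x,
                             (int) (pt.y * scaleY) + y);
        }
    }

    public static void main(String[] args) {
        ScaledGroup g = new ScaledGroup(5, 10, 0.5, 0.5);
        Point down = g.parentToChild(new Point(5, 10));  // group's origin
        Point up = g.childToParent(new Point(20, 20));
        System.out.println(down.x + "," + down.y + " " + up.x + "," + up.y);
    }
}
```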


> Can't send events to Interactors using Group's coords:
> I could assume that all Group objects have getX() and getY(), except they
> aren't in the interface. I initially used the bounding box, but that is
> wrong too, since if a group does not have an object in its upper left,
> then the bounding box would not start at the group's origin
> "getBoundingBox() returns the smallest rectangle that contains all the
> pixel drawn by the graphical object" (from assignment 2).

getBoundingBox() is supposed to use its own origin and size as the parameters of the bounding box. The quote about the "smallest rectangle" was directed at things like lines and outline rectangles, where the graphics may go outside of the coordinates used to define the object.

> I'm just going to blindly assume that all Groups happen to have a getX()
> and getY(), since there is no real way to get this otherwise. Of course,
> if you want me to use boundingBox, just let me know and I'll change back
> to my original code. I just wouldn't be able to easily test anything
> though since such an assumption would make my Groups all wrong.

It is OK to add a getX and getY to the Group interface. You should probably then also add getWidth and getHeight.


> for ChoiceInteractor, I was thinking about how to make it so the user could 
> select multiple items using CTRL+Click, and single items using an unmasked 
> click. However, in order to make that work, I would first need two 
> interactors, one initialized to SINGLE/TOGGLE, the other to MULTIPLE, but 
> the hard part would to make them exchange the selection.. there is the 
> getSelection method, but say I select a few items with the multiple-select 
> interactor, and then click a single object with my single interactor- I 
> need the single interactor to deselect those items selected by the multiple 
> interactor. I was thinking about using a static array/collection/whatever 
> inside ChoiceInteractor to do this.. possibly would have a hash table with 
> a list of the selection as a value, and the group of the interactor as the 
> key (in case there were multiple drawing canvasses, I wouldn't want a 
> single interactor to deselect items in another drawing canvas). Is this ok 
> to do, or am I headed down the wrong road?

This might be one way to achieve this. I think for this kind of behavior, it might be better to make a new choice-interactor class. For example, you might make a ChoiceTwoInteractor as a subclass of ChoiceInteractor that maybe has two start events and two corresponding "types", one for each start event. Then you could have LEFTDOWN map to SINGLE and SHIFT_LEFT map to LIST_TOGGLE, or whatever. This is just an idea. Feel free to make up a better design.


> I'm trying to figure out how to implement the move interactor, by thinking 
> about how it would function with the ChoiceInteractor.

> So I think what I'm going to do is set the start event for ChoiceInteractor 
> to be a button1_down, and the start event for the MoveInteractor to be 
> mouse_move+button1_down. The problem is that I think the way you want 
> us to write interactors is to getChildren() from the Interactor's group, and 
> then look for something to operate on.

> The problem is that I think my start events correctly describe the 
> desired behavior (when you click the button down, it becomes selected, and 
> then when you start moving the mouse the object moves, which is what 
> drawing programs do... some may not initially select the item if the mouse 
> is held down too long), but there is a corner case where, using the 
> interactor model of the homework, where the wrong object would be selected. 
> Say I had 2 objects, one behind the other. I click on the object behind 
> (and it was correctly selected), and try to move it in the direction of the 
> other object, i.e. the mouse would be right on the border of their 
> intersection initially, and the first motion would put it on the object in 
> front. Then, the object in front (not the one that I tried to select) would 
> end up getting moved, because the ChoiceInteractor would finish, and the 
> MoveInteractor would start. However, because the MoveInteractor doesn't 
> take into account the selection made by the ChoiceInteractor, it would 
> simply scan the children of the group, and would find itself over the item 
> in front (not the item that was just selected).

> One possible solution I see to this is to have a Group that represents the 
> selected items (but is not a Selectable interface object- mainly just 
> something to limit what children MoveInteractor scans). However, this is 
> clearly out of spec operation, because a child would not be able to send 
> damage to both groups. I could, however, create a special purpose group to 
> pass on damage to another group, while also holding the children. I guess 
> the function of this group would not need to implement any draw functions, 
> but basically it'd be used to "hold" selected items in a virtual group, 
> while those children still belonged to their real drawn groups. I can't 
> remove the children from the drawn group and re-add them because then the 
> layer order would be wrong.

> So, basically, I think I have a solution, but it involves hacking the 
> interfaces.. is there a better way to do this that I'm overlooking?

You are apparently trying to make the drawing editor select and move objects in the "standard" way that almost all of today's editors work. The interactor and event model described in the homework does not really support this, as you are finding out. 

One solution is to just make a different, simpler design, say where one needs to select different tools in the palette (different modes) to select objects than to move objects. Or you could have different events for select vs. move (e.g., left down for select and right down for move).

The way that Amulet allows interactors to handle the standard select and move is by having two special input events that Interactors can start on: CLICK and DRAG. The DRAG event is only raised when the mouse is pressed down and moved. To solve the problem you raise about the wrong object, Amulet actually saves the object under the mouse at the time of the downpress and then waits to see if the mouse is moved. If it is moved, then the original object is noted with the DRAG event. If the mouse is not moved, and a button up is seen, then the CLICK event is raised instead, again with the original object. Then, the Choice Interactor is non-continuous, and starts on left down (so the object always becomes selected), and the move interactor starts on left-drag. Nothing is assigned to CLICK (so if there isn't a drag, the object is just selected). If you want to implement CLICK and DRAG events, that would definitely be worth extra credit on this assignment.
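If you try the extra credit, the CLICK/DRAG synthesis might be sketched as a small state machine like this. The names are hypothetical; a real version would attach the saved object to the synthesized event rather than just storing it.

```java
public class ClickDragSynth {
    enum Out { NONE, CLICK, DRAG }

    Object pressedObject;  // the object under the mouse at down-press
    boolean down, moved;

    Out mouseDown(Object under) {
        pressedObject = under;  // saved for the synthesized event
        down = true;
        moved = false;
        return Out.NONE;        // decision is deferred until move or up
    }

    Out mouseMove() {
        if (down && !moved) {
            moved = true;
            return Out.DRAG;    // DRAG is raised with pressedObject
        }
        return Out.NONE;
    }

    Out mouseUp() {
        down = false;
        return moved ? Out.NONE : Out.CLICK;  // no motion -> CLICK
    }

    public static void main(String[] args) {
        ClickDragSynth s = new ClickDragSynth();
        s.mouseDown("rect");
        System.out.println(s.mouseUp());   // press+release, no motion
        s.mouseDown("rect");
        System.out.println(s.mouseMove()); // press then motion
    }
}
```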


> I'm still not entirely clear on startAnywhere- is it something that is per 
> interactor, like a moveInteractor would just always have startAnywhere = 
> false, or is it a attribute (that does not show up in any of the 
> constructors..) settable by the user after the creation of the interactor?

You might notice that StartAnywhere and the other parameters for interactors that you ask about below all have both accessors and set methods. Therefore, you can assume that the set method might be called at some point. If you want, you can assume that the parameters won't be changed while the interactor is running.

> And what does it mean to be a startAnywhere interactor? Is it, assuming you 
> could set a MoveInteractor's startAnywhere property to be true, mean that 
> you wouldn't have to be over a GraphicalObject to start the interactor 
> (meaning that any tests regarding the event.x & event.y are ignored), or 
> that the entire startEvent is ignored, and the interactor just starts as 
> soon as the start() method is called?

Let's assume that startAnywhere only affects the start LOCATION, and NOT the start event. Therefore, the interactor still won't start until the start event happens, but it won't check the location.

In Amulet, we had separate ways to say the start event and start locations were to be ignored, but for this assignment, we only had you deal with the start location.

> Also, I am unclear on how you want us to implement the 
> start()/running()/stop() interface. Should the InteractiveWindowGroup test 
> the Interactor with something like 
> event.matches(interactor.getStartEvent()), or do we just call 
> interactor.start(event) and check interactor.getState() after? Personally, 
> I think the latter is easier.. Then once the interactor is running, we 
> would just call running(event) on it, and if it decided to stop, then it 
> would just stop, and we would know by doing interactor.getState() after the 
> call to running().
>
> Does it even make a difference for grading?

I think I answered this adequately in the first hint of this file (above).

> And, regarding ChoiceInteractor, should firstOnly and type be 
> user-changeable or read-only after instantiation? And would we have to 
> worry about knucklehead users/programmers who change these while the 
> interactor is running?

firstOnly does not appear to have a set method, therefore one wouldn't be able to change it.


> What is ChoiceInteractor's "area of interest"? I can think of two 
> reasonable answers. One is the Selectable Graphical Object's bounding box, 
> and another is the bounding box of the parent. I don't think was specified 
> in the homework.

Assuming getStartAnywhere is false, the ChoiceInteractor should start if the start event happens while the cursor is over any of the objects in the group the ChoiceInteractor is attached to. Similarly with MoveInteractor. It should NOT start if the cursor is inside the group's bounding box but not over any objects in the group. 

Note that the objects in the group should implement the Selectable interface. The group itself that the interactor is associated with does not need to implement Selectable.

For while running, for Choice Interactors, the appropriate "area of interest" that is used to determine inside vs. outside depends on the parameter firstOnly. (Do you see why?) 

In general, you can use the bounding box of the group as the area for the inside/outside check.


New Hints with respect to TestHomework3.java

>  I'm pretty sure I remember you saying it may be necessary to wrap things 
> in a wrapper (something that implements Selectable) for our drawing 
> editor.. I did this by doing it in NewInteractor. However, this doesn't 
> seem to be something that agrees entirely with TestHomework3's myline 
> class.. Should I move it (the part that wraps the new object in a 
> selectable container) to somewhere else like the drawing editor's top-level 
> group? Or should a Selectable recursively propagate properties related to 
> being selected to its children that implement Selectable?

I think it is pretty clear in the original HW3 description: NewInteractor's make() method is abstract and doesn't do anything. As it says in HW3:
-- Note that your NewLineInteractor and NewRectInteractor
-- don't have to create pure Line and Rect objects. You may
-- want to create lines wrapped inside a SelectionHandles group,
-- or perhaps a subclass of Line that implements Selectable
-- and draws selection-handle feedback itself.
The TestHomework3 code uses a subclass of NewInteractor rather than a subclass of NewLineInteractor for that reason.

> For ChoiceInteractor, the selections of the different instances are not 
> shared, and there is no setSelection method. So, should ChoiceInteractor be 
> scanning all children in its group to see who's selected/not selected when 
> it starts? If multiple objects are selected with an interactor conforming 
> to MULTIPLE, and then an object is selected with the SINGLE or TOGGLE 
> interactor, then those other objects selected with the MULTIPLE interactor 
> will remain selected. Likewise, if the MULTIPLE choice interactor starts up 
> again, its list will not be up to date if objects were deselected/added 
> using other types of interactors.

I specifically did NOT take points off if different ChoiceInteractors did not coordinate their selection lists. So it is OK if a ChoiceInteractor only works off of its own internal list of selections. In testing, I made sure the selection was empty before testing each ChoiceInteractor's behavior. It is NOT necessary for a ChoiceInteractor to iterate through the objects themselves to make sure they are set correctly, although it is obviously MORE CORRECT to do this. If you were handling correctly the case of multiple choice interactors working across the same sets of objects and you want extra credit for this, let me know.
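Since a ChoiceInteractor is allowed to work off of only its own internal list, the bookkeeping can be as simple as the following sketch. The class name and methods are hypothetical, and a generic T stands in for the Selectable objects; it only illustrates the SINGLE vs. TOGGLE/MULTIPLE behavior discussed above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a ChoiceInteractor's private selection list (hypothetical class):
class LocalSelection<T> {
    private final List<T> selected = new ArrayList<>();

    // SINGLE: the newly chosen object replaces the previous selection.
    void selectSingle(T obj) {
        selected.clear();
        selected.add(obj);
    }

    // TOGGLE / MULTIPLE: flip the object's membership in the selection.
    void toggle(T obj) {
        if (!selected.remove(obj)) selected.add(obj);
    }

    List<T> selection() { return selected; }
}
```

Note that, exactly as the question observes, two independent LocalSelection instances would not see each other's changes; coordinating them (or re-scanning the objects themselves) is the "more correct" extra-credit behavior.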


> I'm having trouble with the custom mynewlineinteractor.
> What is "super(false)" supposed to do in mynewlineinteractor's constructor?

That calls the NewInteractor's constructor:
    public NewInteractor (boolean onePoint);
which is supposed to initialize the NewInteractor.

> I had to modify my own NewInteractor class to default to handle Lines in
> order to work with mynewlineinteractor. Is that OK?

No, this is not OK. NewInteractor should implement the standard Interactor methods (such as getGroup, start, etc.), store the value of onePoint, and leave the GraphicalObject-specific processing to make and resize, which are defined only in subclasses of NewInteractor.
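The intended split can be sketched as below. This is only an outline under assumed names: the GraphicalObject stand-in, the isOnePoint accessor, and the toy MyNewLineInteractor subclass are all hypothetical; the real NewInteractor also implements the standard Interactor methods, which are omitted here.

```java
// Minimal stand-in for the framework's GraphicalObject (assumed name):
interface GraphicalObject { }

// Sketch: NewInteractor holds the generic state; all shape-specific work
// lives in the abstract make and resize, defined only in subclasses.
abstract class NewInteractor {
    private final boolean onePoint;          // stored once by the constructor

    public NewInteractor(boolean onePoint) { // this is what super(false) reaches
        this.onePoint = onePoint;
    }

    public boolean isOnePoint() { return onePoint; }  // hypothetical accessor

    // Subclasses decide what kind of GraphicalObject to create...
    public abstract GraphicalObject make(int x1, int y1, int x2, int y2);

    // ...and how to resize it while the interactor is running.
    public abstract void resize(GraphicalObject gobj, int x1, int y1, int x2, int y2);
}

// Toy subclass showing the super(...) call the earlier question asks about:
class MyNewLineInteractor extends NewInteractor {
    MyNewLineInteractor() { super(false); }  // false = a two-point interactor
    public GraphicalObject make(int x1, int y1, int x2, int y2) {
        return new GraphicalObject() { };    // placeholder object for the sketch
    }
    public void resize(GraphicalObject gobj, int x1, int y1, int x2, int y2) { }
}
```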

> My current code allows drawing "mylines" in the test program, but provides
> no selected/interim selected feedback for them. Other Lines and Rects give
> correct selection feedback. Does this mean I need to modify the Selectable
> class to work with mylines? I thought Selectable was just an interface?

If you look in the code, mylines are supposed to change color when you click on them: black = both interim and selected, green = selected only, blue = neither selected nor interim, red = interim only. There shouldn't be any handles shown for mylines.
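That color mapping can be written as a small helper, sketched below. The class and method names are hypothetical; the real logic lives inside TestHomework3.java's myline class.

```java
import java.awt.Color;

// The myline feedback colors described above (hypothetical helper):
class MyLineFeedback {
    static Color colorFor(boolean selected, boolean interim) {
        if (selected && interim) return Color.black;  // both interim and selected
        if (selected)            return Color.green;  // selected only
        if (interim)             return Color.red;    // interim only
        return Color.blue;                            // neither
    }
}
```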


> I'm having a little trouble making the NewInteractor work properly when 
> subclasses implement wrapping their newly created objects in 
> SelectionHandle objects.

> Originally, I just had the state machine (which is implemented inside 
> NewInteractor) enclose the newly created object in a SelectionHandles when 
> the interactor stopped. However, this obviously doesn't work for something 
> like the myline class.

> So, I've been trying to get around this by always working with lines and 
> rects that are contained inside SelectionHandles. The problem with this is 
> that now resizing is basically impossible to do (with the defined 
> interface). If NewLineInteractor.make() returns a SelectionHandles object, 
> which draws children in its own coordinate system, as mine does, then I 
> can't get resize() to work correctly- because my SelectionHandles leaves a 
> little border space around its children to draw the handles, I can 
> compensate for the SelectionHandle's x&y, and put the coordinates in terms 
> of the inside of the SelectionHandle group, but there is no way to 
> externally know how much to compensate for the border space inside the 
> SelectionHandles. Basically what happens is that the first time you move 
> the mouse, the object jumps a little bit, equal to the border space in the 
> upper-left corner.

> Of course, there are other solutions, like removing the line from the 
> SelectionHandles group, resizing it, then re-adding it, but I don't think 
> this is a very efficient way to solve this problem.

> Now, if the interface was a little different, this would be very easy to 
> solve- for example, if there was a abstract finalize() method to be called 
> when the interactor was stopped.

> So, is there some easy solution I have just overlooked, should I just do 
> the poor (but correct and pretty easy) solution, the not completely 
> correct way, or can I change the NewInteractor interface?

Yes, there is an easy solution you just overlooked. Since the NewLineInteractor knows that it is making a Line inside a SelectionHandles group in the make method, the resize method can simply get the line out of the SelectionHandles group and resize the line itself. Both methods go together so it is fine that the resize method would know what the make method knows. For example:

public void resize (GraphicalObject gobj, int x1, int y1, int x2, int y2) {
	// make wrapped the Line in a SelectionHandles group, so unwrap it here
	Line line = (Line)(((SelectionHandles)gobj).getChildren().get(0));
	line.setX1(x1);
	line.setY1(y1);
	line.setX2(x2);
	line.setY2(y2);
}