Thursday, March 31, 2011

What is cool in JBoss's jmx-console?

I like JNDIView, which shows me all the JNDI bindings. Is there anything else useful in the jmx-console?

P.S. I know I could make my own MBeans, but I want to know what cool actions are available by default.

From stackoverflow

How to stop further JavaScript events from executing

I'm using an Infragistics grid on a form and am handling multiple client-side events on the cells (e.g. cell click, before cell edit, before cell update, etc).

On the cell click event, I'm spawning a secondary window (successfully) as follows:

 var convChartWindow = window.open("conversioncharts/" + currentCell.getValue() + ".html", "chart", "location=0,0, status=1, scrollbars=1, width=" + screen.width + ", height=175");        
 convChartWindow.focus();

However, the 'convChartWindow' does not maintain the focus, and gets covered by the parent window. None of my other cell events fire after this, but there seems to be some Infragistics JavaScript that runs.

Is there something I can put after the .focus() call above to prevent further JavaScript from running and keep the focus on the correct window?

Thanks!

From stackoverflow
  • I don't know what Infragistics is. I'll assume it's some framework over which you have no control. If I were you, I would try the following:

    1) Set a timeout to take focus back to the window after 1000 ms, or set a timeout loop to keep setting focus to the window you need. This is a hack (a sketch of it appears after the answers below).

    2) Figure out which functions fire the events that ruin your life, override those functions in your code, and add a flag that will prevent them from doing something evil.

    3) Last, at least read this: http://www.quirksmode.org/js/events_order.html. There might be something useful for you in there.

    Gern Blandston : I went with the timeout and it works. Thanks!
  • Have the child window call the focus itself when it opens.

  • If there is something else in the function that runs the code you gave, try using:

    break;
    

    after:

    convChartWindow.focus();
    
  • Call this:

    // Prevents event bubble up or any usage after this is called.
    // pE - event object
    function StopEvent(pE)
    {
       if (!pE)
         if (window.event)
           pE = window.event;
         else
           return;
      if (pE.cancelBubble != null)
         pE.cancelBubble = true;
      if (pE.stopPropagation)
         pE.stopPropagation();
      if (pE.preventDefault)
         pE.preventDefault();
      if (window.event)
         pE.returnValue = false;
      if (pE.cancel != null)
         pE.cancel = true;
    }  // StopEvent
    

    This was snipped from here: What is equivalent of 'event.returnValue=false' in Firefox

    and was written by Peter Blum
    See: PeterBlum.com

    Gern Blandston : What is the 'pE'/event object that gets sent into this function?
    Bravax : Try passing in this. i.e. StopEvent(this); Otherwise what are the parameters of your Javascript event?
    Gern Blandston : That didn't work. It runs through the code, but doesn't stop the focus changing back. The parameters of the event are just strings of IDs.
    Bravax : Are there any polling or timeout Javascript functions on your page?
    Gern Blandston : Nope. None that I've written. That said, the Infragistics JavaScript might, but I'm not sure why it would.
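
For reference, here is a minimal sketch of the timeout workaround from the first answer above. The file name, delays, and variable names are illustrative assumptions, not code from the question:

    // Keep pulling focus back to the child window for a short period, so that
    // any framework handlers that refocus the parent are overridden.
    const convChartWindow = window.open("conversioncharts/example.html", "chart");

    const keepFocus = window.setInterval(() => {
        if (convChartWindow && !convChartWindow.closed) {
            convChartWindow.focus(); // re-assert focus on the child window
        }
    }, 250);

    // Give up after a couple of seconds; by then the framework code has finished.
    window.setTimeout(() => window.clearInterval(keepFocus), 2000);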

iPhone: How do I get the location of a touch down event?

I have an empty View-Based app. I'd like to do something with the x,y coordinates of every touch down on the screen. How do I do this?

From stackoverflow
  • Override the

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    

    method (from the UIResponder class). You can override it in your view class. The 'touches' parameter holds a set of 'UITouch' instances that you can query for the location in the view.

  • The correct answer to this question should be RTFM (read the faq meticulously) as it's a basic tenet of the UI for iPhone.

    don't mark this down because I suggest reading the documentation - that should always be the first place to look, not asking a question online :(

    José Joel. : But It's not an answer to his question.
    KevinDTimm : baloney, it is an answer to his question :( give a man a fish, he eats for a day; teach a man to fish, he eats forever

How to read .rej files

I'm having trouble applying a patch to my source tree, and it's not the usual -p stripping problem. Patch is able to find the file to patch.

Specifically, my question is how to read / interpret the .rej files patch creates when it fails on a few hunks. Most discussions of patch/diff I have seen don't include this.

From stackoverflow
  • A simple example:

    $ echo -e "line 1\nline 2\nline 3" > a
    $ sed -e 's/2/b/' <a >b
    $ sed -e 's/2/c/' <a >c
    $ diff a b > ab.diff
    $ patch c < ab.diff
    $ cat c.rej
    ***************
    *** 2
    - line 2
    --- 2 -----
    + line b
    

    As you can see: The old file contains line 2 and the new file should contain line b. However, it actually contains line c (that's not visible in the reject file).

    In fact, the easiest way to resolve such problems is to take the diff fragment from the .diff/.patch file, insert it at the appropriate place in the file to be patched, and then compare the code by hand to figure out what lines actually cause the conflict.

    Or - alternatively: Get the original file (unmodified), patch it and run a three way merge on the file.

    Regards, Bodo

ASP.NET and the new Chart Control

I'm looking through the asp:chart documentation and I don't see where you set a "ChartType" on a ChartArea. How do you make a simple 2D PieChart? I don't see where you set the type at all.

Note: I adapted Gordon's answer to ASP.NET. It was close enough to get me on the correct path. Thanks.

From stackoverflow
  • using System.Windows.Forms.DataVisualization.Charting;
    ...
    
    // Populate series data
    double[]    yValues = {65.62, 75.54, 60.45, 34.73, 85.42};
    string[]    xValues = {"France", "Canada", "Germany", "USA", "Italy"};
    chart1.Series["Default"].Points.DataBindXY(xValues, yValues);
    
    // Set Pie chart type
    chart1.Series["Default"].ChartType = SeriesChartType.Pie;
    
    // Set labels style
    chart1.Series["Default"]["PieLabelStyle"] = "Outside";
    
    tyndall : Aha. ChartType is on the Series. -1 cool point for me.

Invoking built-in operators indirectly in Python

Let's say you have a small calculator program that takes numbers and an operator to perform on those numbers as input, then prints out the result of applying the specified operation. So if you input "4 + 5" it will print out "9". Simple, right? Well, what I want to be able to write is something like this:

a, op, b = raw_input().split()
print somehowInvokeOperator(op, a, b)

The problem is that "somehowInvokeOperator()" part. Is there any way to do this without resorting to either (a) eval() or (b) some type of dictionary mapping keys like "+" and "-" to functions that perform the appropriate operation? getattr() doesn't appear to work for this. I don't really need this code for anything, I'm just curious to see if this can be solved in Python as elegantly as it can in other dynamic languages.

From stackoverflow
  • If you really wanted to do this, you would need the standard operator module. See also Emulating numeric types. And, yes, a dictionary full of functions would be a perfectly sound dynamic way to make this happen.

    import operator
    operations = {'+' : operator.add}
    result = operations[op](a, b)
    
    joeforker : @grg: see b.__radd__(a)
    hasen j : { '+': lambda a,b: a + b } #so it's all inline
    Miles : {'+': operator.add} # so you're not recreating less efficient versions of functions already in the standard library
    hasen j : you should add a note that this answer used to be entirely different .. at least so the comments make some sense.
  • Basically no, you will at least need to have a dictionary or function to map operator characters to their implementations. It's actually a little more complicated than that, since not all operators take the form a [op] b, so in general you'd need to do a bit of parsing; see http://www.python.org/doc/2.6/library/operator.html for the full list of correspondences, and for the functions you'll probably want to use for the operator implementations.

    If you're only trying to implement the binary arithmetic operators like + - * / % ** then a dictionary should be good enough.

  • Warning: this is not pythonic at all!! (goes against every rule of the Zen of Python!)

    Here's a magical, one-liner dictionary:

    ops = eval( '{%s}'%','.join([('\''+op + '\' : lambda a,b: a ' + op + ' b') for op in '+-*/%']) )
    

    That defines your dictionary, which you can use like this:

    ops['+'](10,4) #returns 14
    

    The basic idea is mapping each operator to a lambda function:

    { '+' : lambda a,b: a + b }
    
    Bartosz Radaczyński : I love this :D. Do you write your code this way more often?
    hasen j : not really, I never use eval, but in this case it was the only option. This "solution" evolved out of my comment on joeforker's answer; it took me a while to sort it out.

Is there a *simple* way to extract deeply nested values with XML::Simple?

I'm using Perl's XML::Simple to parse deeply nested XML and would like to extract a small list of elements about 4 levels down:

A
  B
    C 
      D1
      D2
      D3

Ideally I want to do this on the input step, if possible. Like this:

my @list = XMLin($xml, { SomeAttribute => 'ButWhat?' });

ending up with the same thing as if I did:

@list = ('D1', 'D2', 'D3')

Is it possible? Or just not that 'simple'?

From stackoverflow
  • Assuming your data in memory looks like:

    my $parsed = {
        A => {
            B => {
                C => [ qw/here is your list/ ],
            },
        },
    };
    

    Then you can get your list with my @list = @{ $parsed->{A}{B}{C} }.

    Is this what you are trying to do?

    Edit: taking into account some of the comments, perhaps you want Data::Visitor::Callback. You can then extract all the arrays like:

    my @arrays;
    my $v = Data::Visitor::Callback->new(
        array => sub { push @arrays, $_ },
    );
    $v->visit( $parsed_xml );
    

    After that runs, \@arrays will be a list of references to arbitrarily-deeply nested arrays.

    Finally, if you just have an attribute name and want to search for matching XML nodes, you really want XPath:

    use XML::LibXML;
    my $parser = XML::LibXML->new;
    my $doc = $parser->parse_string( $xml_string );
    
    # yeah, I am naming the variable data.  so there.
    my @data = map { $_->textContent } $doc->findnodes('//p[@id="foo"]');
    

    Anyway, TMTOWTDI. If you are working with XML, and want to do something complicated, XML::Simple is rarely the right answer. I use XML::LibXML for everything, since it's nearly always easier.

    One more thing: you may want Data::DPath. It lets you "XPath" an in-memory Perl data structure.

    git-noob : Thanks for the answer. Yes I know I can do this - but I was hoping to not have to test for the existence of all the levels in the hash to access the list.
    brian d foy : The trick there is that you have to know how many levels deep you're going to go before you start.
  • The fact that you're using XML::Simple is irrelevant; you're trying to search a structure of hash refs and array refs. Do you know what it is you're searching for? Will it always be in the same place? If so, then something like what jrockway wrote will do the trick easily. If not, then you'll need to walk each piece of the structure until you find what you're looking for.

    One thing I often do is to dump the structure that XML::Simple returns using Data::Dumper, to see what it looks like (if it will always "look" the same; if not, you can dynamically determine how to walk it by testing whether something is a ref and what kind of ref it is). The real question is: what are you looking for?

  • Data::Diver provides a nice interface for digging in deep structures.

  • Building on Jon's answer, here's the basic code I use when I need to do this sort of thing. If I need anything fancier, I usually reach for a module if I'm allowed to do that.

    The trick in get_values starts with the top-level reference, gets the next lower level, and puts it in the same variable. It keeps going until I get to where I want to be. Most of the code is just assertions to ensure that things work out right. In most cases I find it's the data that's messed up, not the traversal (but I do lots of data clean-up work). Adjust the error checking for your situation.

    use Carp qw(croak);
    
    my $parsed = {
      A => {
        B => {
          C => [ qw/here is your list/ ],
          D => {
            E =>  [ qw/this is a deeper list/ ],
            },
        },
      },
    };
    
    my @keys = qw( A B C D );
    
    my @values = eval { get_values( $parsed, @keys ) } or die;
    
    $" = " ][ ";
    print "Values are [ @values ]\n";
    
    sub get_values
        {
        my( $hash, @keys ) = @_;
    
        my $v = $hash; # starting reference
    
        foreach my $key ( @keys )
         {
         croak "Value is not a hash ref [at $key!]\n" unless ref $v eq ref {};
         croak "Key $key does not exist!\n" unless exists $v->{$key};
         $v = $v->{$key}; # replace with ref down one level
         }
    
        croak "Value is not an array ref!" unless ref $v eq ref [];
        @$v;
        }
    
  • Thanks for all the suggestions.

    In the end I ducked the problem of traversing the data structure by using an eval block.

    my $xml_tree;
    my @list;
    
    eval {
    
       # just go for it
       @list = @{ $xml_tree->{A}->{B}->{C} };  # no "my" here, or the outer @list would be shadowed
    
    };
    
    if ($@) {
       say "oops - xml is not in expected format - and that happens sometimes";
    }
    
    Chris Lutz : I don't think you need that many ->'s - $xml_tree->{A}{B}{C} should work fine.

In SharePoint, how can a search be limited to the current folder in a doc library?

Hello all,

We have a large document library with 3000+ folders. Our customer wants to be able to search within the current folder. Because this document library has a lot of folders, creating one scope per folder is out of the question. So the question is: how can a search be limited to the current folder in a doc library?

Thanks

From stackoverflow
  • Offhand I would say that you would need to implement a custom search feature and access the Search API directly. More importantly, you seem to be suffering from a case of FileShareism. I've seen many a SharePoint implementation suffer and die from this affliction.

    Brian Bolton : No need for custom search feature. This is all built in.
    webwires : I would call that more of a workaround and not necessarily "built in", but it certainly gets the job done. It all depends on whether it satisfies the customer's requirements, as some organizations will demand a consistent interface for searching.
  • If you want to limit to a document library:

    This is already built in. When you are viewing a document library, the search box in the upper right defaults to "This List: NameOfDocLib". Searching here will limit the scope to the document library.

    If you want to search individual folders:

    This is built into Windows. Use the built-in Windows Explorer search.

    Tell your customers to open the folder in Explorer view, right-click on the folder you want to search, and select Search.

    Don't tell your customer that this was all built in. Take credit for it. Profit! :)

    @webwires I agree about the 3000 folders. You should really think about breaking that out into multiple document libraries.

    Brian Bolton : wow, was my answer really that bad to deserve a -1? I just presented the no-cost solution and I get voted down. strange. :/

Monitor active web connections on IIS 7 in real time (perhaps throttle individual IP's)?

We develop a web app that manages files and resources for different users to download throughout the day on a web server with very limited upstream bandwidth.

Is there any way to monitor in real time how much upstream bandwidth is being taken up by individual connections to IIS (7.0)?

Ideally we'd like way to see a list of each active IIS connection, the KB/s being delivered to each in real time, and the destination IP address.

As a super bonus: Is there any way to individually throttle connections/IP's so that they don't hog all the bandwidth?

From stackoverflow
  • Some prosumer-level software firewalls let you do this. If you configure IIS so that each worker process is easily distinguishable from the others, you can accomplish what you want using software like Net Limiter.

    Matias Nino : This works! Thank you!
  • Have you looked into the Bit Rate Throttling module? It can be used to throttle media and non-media files at specified bit rates.

Are embedded developers more conservative than their desktop brethren?

I've been in the embedded space for a while now, and it seems that most programmers I talk to are doing things pretty much the same way it was done 15 years or more ago: Waterfall(ish) development, command line tools, and a small group uses lint.

Contrast this with the server/desktop environment, where there seems to be lots of activity related to all sorts of facets of programming:

  • XP, Scrum, Iterative, Lean/Agile
  • Continuous Integration
  • Automated Builds
  • Automated Unit Testing Frameworks
  • Refactoring tool support

Is it just that the embedded environment makes it more difficult to implement new practices or tools?
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?

I do realize that this is a generalization, and some embedded projects do use Scrum, Agile, CI, Automated Builds (in fact I worked at a company that had that in place since the 80s). But my impression is that it is a very small percentage.

From stackoverflow
  • We are all used to the fact that our desktop PC crashes once in a while (or at least an application on the desktop suddenly disappears). It's no big deal. The next patch will fix it.

    In the embedded space, you are building something which can't be patched. Lives can depend on your device (in a car, an elevator or a medical system). Most devices are installed and then must run unattended for years. So embedded people tend to be very conservative. TCP/IP is often "too modern". They stick to their trusty serial bus with a communication "stack" that is roughly 50 lines of assembler code.

    What's worse, you simply don't have the abundance of space on the device, which means you can't use one of the latest programming languages that make TDD and automated builds a breeze.

    Next, a lot of embedded development environments are proprietary. If your supplier doesn't support it, you won't get it. Linux has started to weaken this in the past years but a whole lot of devices are not powerful enough to run Linux, yet. And even if they were, the CPU power would be used for something else instead of running a fancy OS which comes with source.

    So yes, there are powerful forces working in the background to keep the embedded space where it is.

  • I would say it's more lack of good toolsets. It's really frustrating when you want to use C++ for its compile-time features not present in C (templates, namespaces, object-orientedness, etc) rather than its run-time features (exceptions, virtual functions) -- but the device manufacturers & 3rd parties just give you a C compiler, not C++. This probably results more from market size (hundreds of millions of PCs running Windows, with hundreds of thousands or even millions of developers -- vs. hundreds of thousands of Chip X, with hundreds or low thousands of developers) than from device capability.

    edit: w/r/t robustness: there are different markets out there. The car/elevator/aeronautics/medical device market is going to have to be rigorous about getting rid of bugs. Other markets (toys, MP3 players, & other consumer electronics) can be less rigorous, especially if it's possible to upgrade code in the field. ("Oops! We're sorry we deleted your music library! We just fixed that bug, you can grab the latest release at our website at your convenience!")

  • These are some reasons I can think of:

    • Embedded teams are usually smaller than desktop/web teams. The code base is smaller.
    • System testing is much more important than unit testing. The software needs to be tested together with the hardware. Automated testing is not always possible and can only be applied to a small fraction of the code base.
    • Embedded engineers have a different skill set than software engineers. They interact with hardware, and know how to use an oscilloscope and a logic analyzer. Usually, the difficult part of their job is to find a glitch in the hardware. They do not have the time to adopt modern software methodologies.
    Dunk : I was all set to give a +1 until you had to mention they do not have time to adopt modern software methodologies. Also, the code base is definitely not smaller. I've been on teams ranging from 60 software developers down to one.
    kgiannakakis : Compare teams of 60 developers with teams of thousands for desktop/web applications. On average, embedded software teams are smaller. Also, I didn't mean that it is of no merit to adopt modern software methodologies, but that sometimes it is too much to ask that from them.
  • Embedded programmers are mostly electrical engineers, not computer scientists or software engineers.

    They excel in their field of expertise. They bring a slower more methodical approach than most computer programmers. When it comes to programming firmware, electrical engineers know just enough to be dangerous.

    Here are some of the things I've noticed electrical engineers doing in C:

    • All code in ONE single file
    • Math like variable names: x, y, z
    • No or missing indentation
    • No standard comment headers
    • No comments at all
    • Too many comments

    In their defense, EEs didn't train to be computer programmers; it's not their job. I think software is the hardest part of creating embedded devices. Designing PCBs and choosing components requires skill but pales in comparison to the complexity of 10,000 lines of code.

    Embedded programmers also have to deal with IDEs that look and behave like the IDEs of the '90s.

    Judge Maygarden : +1: I just took a job writing firmware for a company that only had straight EE types. Yikes! I just finished a complete re-write. ;)
    Dunk : Your experience that most embedded programmers are electrical engineers is different from mine. Yes there are some, but I have mostly been involved with Computer Engineers and occasionally Computer Science grads. Maybe DoD contracts require a bit more Software discipline than the commercial world.
    Sean : +1: I got my bachelors in EE last May but got really interested in SW, and embedded systems was a good place for me. The IDE's of the 90's comment made me laugh. If I could use VS for embedded work that would be great. I'll try to watch out for your list!
    Justin Tanner : Sean, if you're on stackoverflow that means you're probably not one of the EE's I talked about!
  • Is it just that the embedded environment makes it more difficult to implement new practices or tools?

    It's partly a matter of scale. Software is NOT the product, the product is the product. However, there are thousands of different types of microcontrollers and microprocessors out there, and the most popular thousand have 3-4 different compilers that aren't completely compatible.

    So a given tool is only going to be used by a few hundred or thousand engineers.

    In Windows development, however, there are millions of programmers of many levels. The tools produce the software directly, which is the product, and so it's going to get more eyeballs, and more money.

    Each new product that an engineer puts out might have a different processor.

    Is it that the mindset of embedded programmers steers them away from new tools/concepts?

    Embedded programmers are generally software or firmware engineers, as opposed to programmers. Engineering implies a certain amount of design, design analysis, and design proof prior to implementation - in other words a ton of work is done before the first line of code is written, and the documentation, ideally, is specific enough that implementation is merely turning pseudocode like documentation into compilable code.

    New tools and concepts are needed in the design phase, not the implementation phase. An IDE with intellisense may be nice, but by the time the code is being written it's useless cruft - they already know what they need.

    CAD - computer aided design - tools are being developed for firmware engineers that are used in the design phase to develop models and simulations that are directly turned into code. Matlab and simulink are good examples of this. The system as a whole is designed.

    In fact, one might wonder why software developers are still writing code while the engineers are making data/program flow charts and state machine diagrams. Why is UML uptake so slow in the application world? It sounds like application developers can use some of the tools in common use among embedded systems engineers...

    Is it that management in the typical embedded industry behind the curve compared to IT focused fields?

    Actually, it's likely the reverse. When a project starts the engineers have to pick the processor.

    The processor manufacturers get less money on older chips, so they pitch the latest and greatest, and they are generally cheaper overall than the chips used in the previous design (either by die shrinks, more integration, etc).

    So the design is actually using the latest and greatest chips.

    The downside is that the compiler and tools are often immature. They can only build so much on the older tools, and since the target moves with each new processor, they can't focus on a lot of the nice features application programmers might like. Especially since many of those features won't be useful to an embedded engineer.

    There are many other factors, some of which are enumerated by other answers, but it's really a different field even though they both involve programming.

  • I'd say different sorts of problem environments.

    The biggest problem with the waterfall methodology is that requirements change. In every environment I've been in, there has been at least the likelihood of a requirements change, which means that the successful methodologies are those that keep flexibility as long as possible. Even if the customer has signed off in blood, and stands to forfeit his left hand if he suggests a change, there are changes coming in the future.

    In embedded programming, it is possible to nail the requirements down up front. They come from the behavior of the system as a whole, and engineers are good at nailing down system requirements. Nobody's going to come in halfway through and say that the user now wants the pacemaker to deliver syncopated impulses while the recipient is dancing.

    Once the requirements are frozen beyond thawing, which never happens in software designed for human use, waterfall is a very efficient methodology. The team proceeds from well-specified requirements to overall design, then detailed design, then coding, verifying all the way that the stages are done correctly. Then it's time to debug the code (since it's never perfect when written), and final tests to make sure the code meets the requirements.

  • Are embedded developers more conservative than their desktop brethren?

    Yes, because they are more concerned with the consequences of making errors. It’s a big deal to patch an embedded device. Not so much for a desktop app.

    Waterfallish development is necessary in the embedded world because you are generally building hardware at the same time as the software. You need to know as soon as possible how much memory, how much processor speed, how big a flash, and what, if any, special hardware is necessary. The hardware design can't be completed until you know these answers. Once you decide, that is pretty much it. The lead time for redoing a board is far too long. If you mess up, then the software is going to have to work around any shortcomings. Not usually an ideal situation.

    As for the tools, that is largely based on what the supplier provides and any biases of the developers. On some projects I have used XP Embedded and got pretty much everything that the desktop developer gets.

    XP, Scrum, Iterative, Lean/Agile:

    Since most of the design is done up front (by necessity), and you usually don’t have working hardware when it is time to code, the quick turn-around processes don’t really provide much benefit.

    Continuous Integration/Automated Builds

    Nice to have, but not really necessary. What…it takes about 15 seconds to open the IDE and press the compile button.

    Automated Unit Testing

    No reason why this shouldn't be done, but only part of the code can truly be automatically tested because the other part is either hardware dependent or has some other dependencies like timing. So you can't really tell if the code is working by the automated tests.

    Refactoring Tool Support

    For the vendors of embedded processors, the product is the processor. They provide the IDE support in order to encourage you to purchase their processor. They couldn't possibly afford to pay for a Visual Studio-sized development team in order to add all the bells and whistles to an IDE which isn't even their product.

  • I would also add a couple of points here:

    • In general, embedded projects tend to be smaller than desktop projects. This decreases the need for very elaborate software processes.
    • Requirements for embedded projects are often precise and better defined. Therefore Scrum and agile are not so crucial.
    • Finally, embedded projects are generally a mix of software and hardware. Since the software is only a part of the project, embedded developers invest less time in software processes.
    Dunk : "This decreases the need for very elaborated software processes" I've found the opposite to be true. Mistakes on software only projects are far less costly to fix than having to redesign and remanufacture hardware. Embedded systems require much more rigor.
    Dunk : "Requirements for embedded project are often precise and better defined". In the beginning they are no better defined than a software only project. Because hardware needs to be built that can do the job, more analysis is done up front to get these right.
    Dunk : "embedded developpers invest less time in software processes" IME, there are hardware developers and software developers. Why would the software developers invest less time in the software process?

What's the best way to set up data access for an ASP.NET MVC project?

I am starting a new ASP.NET MVC project to learn with, and am wondering what's the optimal way to set up the project(s) to connect to a SQL server for the data. For example, let's pretend we have a Product table and a product object I want to use to populate data in my view.

I know somewhere in here I should have an interface that gets implemented, etc but I can't wrap my mind around it today :-(

EDIT: Right now (i.e. the current, poorly coded version of this app) I am just using plain old SQL Server (2000 even) with only stored procedures for data access, but I would not be averse to adding an extra layer of flexibility for using LINQ to SQL or something.

EDIT #2: One thing I wanted to add was this: I will be writing this against a V1 of the database, and I will need to be able to let our DBA rework the database and give me a V2 later. So it would be nice to only have to change a few small things when features that are not in the database now are added later, rather than having to rewrite a whole new DAL.

From stackoverflow
  • In my site's solution, I have the MVC web application project and a "common" project that contains my POCOs (plain ol' C# objects), business managers and data access layers.

    The DAL classes are tied to SQL Server (I didn't abstract them out) and return POCOs to the business managers that I call from my controllers in the MVC project.

    Ryan Skarin : I guess part of what I'm trying to understand is: what would the difference be between your objects in the common project vs. objects that should go in the Models folder? Or does this common project replace stuffing things into Models?
    muloh : I could be completely wrong (I'm new to MVC), but I'd use the Models folder to create mash-ups of my "common" objects for view-specific use.
    Ryan Skarin : So you'd basically extend your common objects in the models folder if they needed to do something not provided in your basic common object?
    muloh : If I need an object that's specific to the MVC app, I put it in the Models folder. However, I found this quote on asp.net which is making me think I'm wrong: "The model should contain all of your application business logic and database access logic." http://www.asp.net/learn/mvc/tutorial-02-cs.aspx
    Ryan Skarin : Maybe I am thinking too much about separating things into separate projects when separate cs files is really all that I should be doing.
  • It really depends on which data access technology you're using. If you're using Linq To Sql, you might want to abstract away the data access behind some sort of "repository" interface, such as an IProductRepository. The main appeal for this is that you can change out the specific data access implementation at any time (such as when writing unit tests).

    I've tried to cover some of this here:

    Ryan Skarin : Can't open that link....is your site down?
  • I would check out Rob Conery's videos on his creation of an MVC store front. The series can be found here: MVC Store Front Series

    This series dives into all sorts of design-related subjects as well as coding/testing practices to use with MVC and other projects.

  • For our application I plan on using LINQ to Entities, but as it's new to me there is the possibility that I will want to replace it in the future if it doesn't perform as I would like, and use something else like LINQ to SQL or NHibernate. So I'll be abstracting the data access objects into an abstract factory so that the implementation is hidden from the application.

    How you do it is up to you; as long as you choose a proven and well-known design pattern for the implementation, I think your final product will be well supported and robust.

  • Check out the Code Camp Server for a good reference application that does this very thing and, as @haacked stated, abstract that goo away to keep 'em separated (thx OffSpring).

  • I think that Billy McCafferty's S#arp Architecture is a quite nice example of using ASP.NET MVC with a data access layer (using NHibernate as default), dependency injection (Ninject atm, but there are plans to support the CommonServiceLocator) and test-driven development. The framework is still in development, but I consider it quite good and stable. As of the current release, there should be few breaking changes until there is a final release, so coding against it should be okay.

  • Use LINQ. Create a LINQ to SQL file and drag and drop all the tables and views you need. Then when you call your model all of your CRUD level stuff is created for you automagically.

    LINQ is the best thing I have seen in a long long time. Here are some simple samples for grabbing data from Scott Gu's blog.

    LINQ Tutorial

  • I just did my first MVC project and I used a Service-Repository design pattern. There is a good bit of information about it on the net right now. It made my transition from Linq->Sql to Entity Framework effortless. If you think you're going to be changing a lot put in the little extra effort to use Interfaces.

    I recommend Entity Framework for your DAL/Repository.

  • I have done a few MVC applications and I have found a structure that works very nicely for me. It is based upon Rob Conery's MVC Storefront Series that JPrescottSanders mentioned (although the link he posted is wrong).

    So here goes - I usually try to restrict my controllers to only contain view logic. This includes retrieving data to pass on to the views and mapping from data passed back from the view to the domain model. The key is to try and keep business logic out of this layer.

    To this end I usually end up with 3 layers in my application. The first is the presentation layer - the controllers. The second is the service layer - this layer is responsible for executing complex queries as well as things like validation. The third layer is the repository layer - this layer is responsible for all access to the database.

    So in your products example, this would mean that you would have a ProductRepository with methods such as GetProducts() and SaveProduct(Product product). You would also have a ProductService (which depends on the ProductRepository) with methods such as GetProductsForUser(User user), GetProductsWithCategory(Category category) and SaveProduct(Product product). Things like validation would also happen here. Finally your controller would depend on your service layer for retrieving and storing products.

    You can get away with skipping the service layer but you will usually find that your controllers get very fat and tend to do too much. I have tried this architecture quite a few times and it tends to work quite nicely, especially since it supports TDD and automated testing very well.
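
To make the layering described in the last answer concrete, here is a minimal sketch of the repository/service split. It is written in TypeScript purely as an illustration; in the actual ASP.NET MVC project these would be C# interfaces and classes, and the names are modeled on the answer above:

    // Domain object
    interface Product {
        id: number;
        name: string;
        category: string;
    }

    // Repository layer: all database access lives behind this interface.
    interface ProductRepository {
        getProducts(): Product[];
        saveProduct(product: Product): void;
    }

    // Service layer: complex queries and validation, built on top of the repository.
    class ProductService {
        constructor(private repository: ProductRepository) {}

        getProductsWithCategory(category: string): Product[] {
            return this.repository.getProducts()
                .filter(p => p.category === category);
        }

        saveProduct(product: Product): void {
            if (!product.name) {
                throw new Error("Product name is required"); // validation belongs in the service
            }
            this.repository.saveProduct(product);
        }
    }

    // A controller would depend only on ProductService, never on the repository directly.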

SQL stored procedure: year, or year and month

Hi,

I have an SP that takes a year and a month:

Create PROCEDURE Report( @targetYear int, @targetMonth int )

select sum(col) where year(dateTime) = @targetYear and month(dateTime) = @targetMonth

Then I have the same thing for year only:

Create PROCEDURE Report( @targetYear int )

select sum(col) where year(dateTime) = @targetYear

Of course, the logic is more complicated than sum(col).

My question is: how can I write this SP so the logic is not repeated across the two SPs, even if it means passing 0 for the target month when I mean the whole year?

From stackoverflow
  • SELECT sum(col) 
    FROM [yourtable]
    WHERE year(dateTime) = @TargetYear 
        AND (@targetMonth < 0 OR month(dateTime) = @targetMonth)
    
  • I like Joel's answer, except his won't work if you pass in a zero as the month. If you want to use @targetMonth = 0 for the whole year, for example:

    SELECT sum(col) 
    WHERE year(dateTime) = @TargetYear AND 
    (@targetMonth = 0 OR month(dateTime) = @targetMonth)
    
  • The stored proc can have an optional parameter:

    Create PROCEDURE Report( @targetYear int, @targetMonth int = null )
    

    It can be called either with the parameter or not:

    exec Report 2009
    exec Report 2009, 2
    

    Then in your logic check for a null:

    SELECT sum(col) 
    FROM [yourtable]
    WHERE (year(dateTime) = @TargetYear AND @targetMonth IS NULL)
      OR (year(dateTime) = @TargetYear AND @targetMonth = month(dateTime))
    

    or

    SELECT sum(col) 
    FROM [yourtable]
    WHERE year(dateTime) = @TargetYear 
      AND (@targetMonth IS NULL OR month(dateTime) = @targetMonth)
    

Does gcc work when the wrapper is a different version than the platform-specific binary?

The way I understand gcc, /usr/bin/gcc (and other bits related to gcc, like ld) is a small wrapper that delegates to a platform-specific binary somewhere else on the system.

So does compilation still work correctly if you have a cross compiler that is a couple of versions behind /usr/bin/gcc?

From stackoverflow
  • Yes, the whole idea is to allow different versions of gcc, and gcc for different target platforms (in any combination), to be installed in parallel.

    /usr/bin/gcc just uses fork+exec to call the actual compiler. The command line arguments given to gcc are just passed to the actual compiler, with two exceptions: -V and -b. The latter selects the target platform, the former the version of the compiler.

  • You won't use /usr/bin/gcc to cross-compile. Instead you'll install another compiler under another prefix. For instance, if you're on Debian/Ubuntu you can install a MinGW (win32) cross-compiler by doing:

    apt-get install mingw32

    Which will work perfectly fine side by side with the normal gcc.

Advantages of SQL Server Enterprise vs. Standard (2008)

I'm looking into running a single instance web application on SQL Server 2008. What are the specific advantages that the Enterprise version has over the Standard version in terms of speed? I'm not looking at the management and reporting side of things, which I understand Enterprise is much better at, but just from a raw speed point of view. Based on the information found here, I've only been able to find a few differences.

  1. Parallel index operations - Only matters when you're creating or altering indexes. Does nothing to affect day-to-day speed.

  2. Table and Index Partitioning - Does this really make a difference if everything is on the same disk/raid array?

  3. Limited to 4 CPUs - I understand this is the number of physical processor sockets, not cores, and I don't plan on ever needing a server that has more than 4 sockets. If I do, the extra cost of SQL Server Enterprise is going to be negligible. Or I could just use replication with multiple machines, which Standard edition also supports.

Basically, what it comes down to is: is Enterprise really worth the extra cash ($8487 + CAL vs. $885 + CAL) for a single instance web application?

From stackoverflow
  • I don't believe the CAL license permits you to sit SQL Server behind a web server application. You must purchase a per-processor license to do that. Even if not, if you have more than 25 users/devices, the per-processor model is less costly (you need a CAL for each user or each device connected).

    Per-processor pricing for Enterprise is $24,999 and for Standard it's $5,999. You might also look into Web edition, which is $15/mo. per processor.

    In answer to your question, there is some stuff like indexed views, but if you have less than 10,000 users total, your schema and query design will have way more of an impact on performance than any features not included in standard or web edition.

    Kibbee : The way I understand it, each web server that's accessing the database can be given a single client access license. A per-processor license is used in situations where you want to have a large number (hundreds?) of individual machines/users access the database at the same time.
    John Rudy : @Kibbee: Look under "multiplexing" on this page: http://www.microsoft.com/sqlserver/2005/en/us/Special-Considerations.aspx. That basically indicates that you either need CALs for the web site users (not realistic) or a proc license.
    Robert C. Barth : Exactly. Kibbee, you are incorrect. You would circumvent the entire licensing structure if you could just always build an app in front of the database and call it one CAL. Besides the fact that the language of the CAL says you can't do that.
    Robert C. Barth : FYI, the device license is for when you have an application that runs in a call center, and multiple users per machine would be using the SQL Server. The regular CAL is for single user desktops accessing the SQL Server. Proc licenses are for everything else (web apps, etc.)
  • Unless you're going to have huge databases, no, it's not worth it, spend the extra money on tuning the database design.

    Kibbee : Define "huge".
    Robert C. Barth : 100's of millions of rows.
    SqlACID : 100GB+ or more than 1000+ users connecting at once.
  • In short: given you're worrying about performance, I don't think the extra cash is worth it unless you need extensive use of full mirroring and the other extras you mention.

    Following links might be helpful:

    SQLServer2008 Standard VS Workgroup

    Differences between SQLServer2008 editions - similar to the one you have there

  • You really need to use the Processor licensing. There are more complicated issues regarding the licensing than you might think.

Google Street View URL question

Hi, I have the address of a property, and my application can launch a browser to go to http://maps.google.com?q=searchStringHere. If a good match is found it will take it directly there. Is there anything I can append to the URL to make it switch to Street View without having the exact coordinates? I don't want to code any JavaScript or Flash.

From stackoverflow
  • Unfortunately not - there's no simple answer, based on the address.

    Firstly, the list of parameters for the Google Maps site is documented here, so you can use that as your starting point.

    The easy part is that you need to select the streetview layer "&layer=c".

    However, before anything will display in that layer, you need to specify where your view is. You set the position by the latitude and longitude in cbll and the angle of the camera with some options in cbp.

    To get the latitude and longitude from the address, you need to use a geocoding service, like the google maps api.

    However, this will only get you a street view close to the address. In addition to knowing where the street view needs to be from, you also need to know which angle to point the camera at - this will be different for every address, depending on where the nearest point the StreetView camera took a photo from was, so it's not easy to do automatically (with any information that I know is available...)

    Tim Matthews : Thanks, that site was what I was looking for. The problem is that Street View requires exact coordinates. The solution was to first call it with output=kml, which sends back a simple XML document, then call it again with cbll= ...
    Stobor : Glad that worked out for you. As I mentioned, you still need to get the angles for the cbp variable, but at least you'll get the right spot. At a minimum "&cbp=12,,,," gets you something to see, though.
  • Building a Google Street View URL

    Basic Google Map URL http://maps.google.com/maps?q=

    q= Query - anything passed in this parameter is treated as if it had been typed into the query box on the maps.google.com page.

    Basic url to display GPS cords location

    http://maps.google.com/maps?q=31.33519,-89.28720

    http://maps.google.com/maps?q=&layer=c

    layer= Activates overlays. Current options are "t" traffic, "c" street view. Append (e.g. layer=tc) for simultaneous.

    http://maps.google.com/maps?q=&layer=c&cbll=

    cbll= Latitude,longitude for Street View

    http://maps.google.com/maps?q=&layer=c&cbll=31.33519,-89.28720

    http://maps.google.com/maps?q=&layer=c&cbll=31.335198,-89.287204&cbp=

    cbp= Street View window that accepts 5 parameters:

    1. Street View/map arrangement, 11=upper half Street View and lower half map, 12=mostly Street View with corner map

    2. Rotation angle/bearing (in degrees)

    3. Tilt angle, -90 (straight up) to 90 (straight down)

    4. Zoom level, 0-2

    5. Pitch (in degrees) -90 (straight up) to 90 (straight down), default 5

    The one below is: (11) upper half Street View and lower half map, (0) Facing North, (0) Straight Ahead, (0) Normal Zoom, (0) Pitch of 0

    This one works as is, just change the cords and if you want to face a different direction (the 0 after 11) http://maps.google.com/maps?q=&layer=c&cbll=31.335198,-89.287204&cbp=11,0,0,0,0

    For more Google Street View code interpretation. (A small sketch that assembles one of these URLs from coordinates appears after the answers below.)

  • Anyone know how to calculate what the cbll and cbp is for a property address?

    Stobor : cbp, no - you have to either guess or do it manually. cbll, though can be found using a geocoding service. See my answer for links.
  • You can get the values by pressing the link button at the top of the street view.

  • DUDE THANK YOU!!!
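
A small sketch, following the parameter walkthrough above, that assembles a Street View URL from a latitude/longitude pair. The coordinates, heading, and function name are illustrative placeholders:

    // Build a maps.google.com Street View URL from coordinates, using the
    // q, layer, cbll and cbp parameters described above.
    function buildStreetViewUrl(lat: number, lng: number, heading: number = 0): string {
        const cbll = `${lat},${lng}`;
        // cbp: layout 11 (upper half Street View, lower half map), the given
        // heading in degrees, no tilt, normal zoom, pitch 0.
        const cbp = `11,${heading},0,0,0`;
        return `http://maps.google.com/maps?q=&layer=c&cbll=${cbll}&cbp=${cbp}`;
    }

    // Example: face north at the coordinates used in the answer above.
    const streetViewUrl = buildStreetViewUrl(31.335198, -89.287204);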

Cannot get file data from the clipboard using Flex

Given: A Flex TileList with the following event:

<mx:nativeDragDrop>
  <![CDATA[
    if(event.clipboard.hasFormat(ClipboardFormats.FILE_LIST_FORMAT)) {
      var files:Array = event.clipboard.getData(ClipboardFormats.FILE_LIST_FORMAT) as Array;

      for each(var file:File in files)
      {
        // file.data is null here!
      }

      this.listData.refresh();
    }
  ]]>
</mx:nativeDragDrop>

I am trying to create a list of thumbnails from JPEGs that I drag into this TileList. Image.source can use the URL to show the image, but I need to scale the image down first (hi-res photos). I already have the scaling part done, except that I need BitmapData from the file, and file.data is null.

ALSO, I have tried this:

var x:URLRequest = new URLRequest(value.file.url); // this is a local file (e.g. file:///C:/somefile.jpg)
var b:Bitmap = new Bitmap(x.data as BitmapData);

data is ALSO null! So frustrating. Any help would be appreciated.

From stackoverflow
  • I assume this is a part of an AIR application. (Accessing the clipboard from a plain Flex app is not possible.)

    I have no experience with AIR, but your second code block is clearly wrong. A URLRequest instance does nothing by itself; it is just a static object storing the request details. In order to fetch the data from that URL, you need to create a Loader and pass the request to that loader, like this:

    var req:URLRequest = new URLRequest(value.file.url); // this is a local file (e.g. file:///C:/somefile.jpg)
    var ldr:Loader = new Loader();
    ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, function(event:Event):void {
       var b:Bitmap = event.target.content as Bitmap;
    });
    ldr.load(req);
    

    Of course, you'd have to fill in the Event.COMPLETE handler. Note that the Loader class can be used to load SWF and image objects; for everything else, you'd have to use URLLoader and parse the data yourself.

    Regarding the nativeDragDrop block, here's a snippet from the documentation:

    Typically a handler for the nativeDragEnter or nativeDragOver event evaluates the data being dragged, along with the drag actions allowed, to determine whether an interactive object can accept a drop. To specify that an interactive object is an eligible target, the event handler must call the NativeDragManager.acceptDrop() function, passing in a reference to the object. If the user releases the mouse button over the designated object, the object becomes the drop target and dispatches the nativeDragDrop event.

    Are you calling NativeDragManager.acceptDrop() properly?

    DJTripleThreat : I'm not having issues with the drg/drp functionality. It's the fact that I cant get data from a file by accessing it. Thx for the URLRequest tip. Right after I posted this, I also tried: var x:File = new File(value.file.url); // x.data still null! Could I use the Loader class in this case too?
    DJTripleThreat : ok it really comes down to this. Opening the file works. I can view information about the file. However, this is still breaking: var x:File = new File(value.file.nativePath); x.load(); var b:Bitmap = x.data as Bitmap; // x.data is still null!
    David Hanak : Like I said, I'm not an AIR programmer, but by looking at the specs, I see that here again you'll have to add an Event.COMPLETE listener and read the data only in the listener. Also, I don't think that you can cast data, a ByteArray attribute, into a Bitmap directly.
    David Hanak : Looking at your original code again, it is very possible that in the nativeDragDrop handler, you also need to issue a load() call on the received file object before accessing its data attribute.

Compile Views in ASP.NET MVC

I want an MSBuild task to compile the views so I can see if there are compile-time errors at, well... compile time. Any ideas?

From stackoverflow
  • You can use aspnet_compiler for this:

    C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler -v /Virtual/Application/Path/Or/Path/In/IIS/Metabase -p C:\Path\To\Your\WebProject -f -errorstack C:\Where\To\Put\Compiled\Site

    where "/Virtual/Application/Path/Or/Path/In/IIS/Metabase" is something like this: "/MyApp" or "/lm/w3svc2/1/root/"

    Also there is a AspNetCompiler Task on MSDN, showing how to integrate aspnet_compiler with MSBuild:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="PrecompileWeb">
            <AspNetCompiler
                VirtualPath="/MyWebSite"
                PhysicalPath="c:\inetpub\wwwroot\MyWebSite\"
                TargetPath="c:\precompiledweb\MyWebSite\"
                Force="true"
                Debug="true"
            />
        </Target>
    </Project>
    
    JarrettV : This is out of date, see an excerpt from the readme doc below.
    Andrew Bullock : +1 for giving me the command line
  • The next release of ASP.NET MVC (available in January or so) should have an MSBuild task that compiles views, so you might want to wait.

    See announcement

    Haacked : I was just going to say that! :)
  • Also, if you use ReSharper, you can activate Solution Wide Analysis and it will detect any compiler errors you might have in aspx files. That is what we do...

    mookid8000 : It's true it works for aspx files, but the solution-wide analysis does not include ascx files (user controls)
    Andrew : I believe it does in R# 5, but it's a huge resource hog for large projects (even on my 16GB home machine it's not worth using).
    Drew Noakes : @Andrew / @mookid8000 -- R# will also catch errors that the compiler won't, such as missing/incorrect views and actions. R# will slow your PC down a bit (I find it fine on a large-ish project with 4GB ram and a hyperthreaded CPU) but I easily make back the time I spend waiting for it, and I end up doing fewer operations on my code as R# provides higher level operations that batch together the many steps I'd have to take to achieve the same task manually. Your project must be huge!
  • From the readme word doc for RC1 (not indexed by google)

    ASP.NET Compiler Post-Build Step

    Currently, errors within a view file are not detected until run time. To let you detect these errors at compile time, ASP.NET MVC projects now include an MvcBuildViews property, which is disabled by default. To enable this property, open the project file and set the MvcBuildViews property to true, as shown in the following example:

    <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <MvcBuildViews>true</MvcBuildViews>
      </PropertyGroup>
    

    Note Enabling this feature adds some overhead to the build time.

    You can update projects that were created with previous releases of MVC to include build-time validation of views by performing the following steps:

    1. Open the project file in a text editor.
    2. Add the following element under the top-most <PropertyGroup> element: <MvcBuildViews>true</MvcBuildViews>
    3. At the end of the project file, uncomment the <Target Name="AfterBuild"> element and modify it to match the following

      <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'"> <AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)..\$(ProjectName)" /> </Target>

    Adrian Grigore : If this does not work for your project, check whether there is an <MvcBuildViews>false</MvcBuildViews> element somewhere in your project file. It was overriding the new element I added on top of it.
    mxmissile : Any way to get this to work with Web Forms?
    Zhaph - Ben Duguid : @mxmissile: Scott Guthrie recommended adding a Web Deployment Project to your solution to get this sort of support in Web Application Projects: http://weblogs.asp.net/scottgu/archive/2006/09/22/Tip_2F00_Trick_3A00_-Optimizing-ASP.NET-2.0-Web-Project-Build-Performance-with-VS-2005.aspx
    Carl Hörberg : Make sure that EnableUpdateable is set to false or else the views won't be precompiled: <EnableUpdateable>false</EnableUpdateable> <MvcBuildViews>true</MvcBuildViews> (http://devcarl.posterous.com/dont-combine-enableupdateable-and-mvcbuildvie)