Friday, May 6, 2011

What exactly is CPU Time in task manager?

I have some WCF services that are hosted in a Windows service. Yesterday I looked at Task Manager and noticed that the CPU Time for my Windows service's process was over 5 hours, while the majority of all other processes were at 0.

What does that mean?

Should I be concerned that the CPU Time was 5+ hours?

From stackoverflow
  • CPU Time is a reflection of how much time your program spends executing instructions in the CPU vs waiting for IO or other resources. Should you be concerned with it being 5+ hours?

    I would guess not; some things to consider are:

    1. How long has this process been running?

    2. Do you have any issues with the performance of the process or other processes on the box that this might be stealing CPU from?

    3. What other processes do you have? Are these active processes that you'd expect to use the CPU? For reference: of the 80 processes I have running, about 20 have used over 1 second of CPU time.

    Edit

    It is possible that the WCF service is stealing CPU from the other services; you need to monitor them to make sure their performance is what you expect. You can also get a sense from total CPU usage. If, for example, you only see 25% of your CPU used, then your other services should not be affected; however, if you're running above 75%, then they might be affected.

    When it comes to monitoring, be sure to monitor over time so you can see how the performance trends; that will help you isolate problems. For example, suppose your service is running fine, but after a deploy it slowly starts to take more and more CPU (say 10% a week). Unless you're trending your CPU usage, you might one day wake up to find your service running slowly, which could be weeks after the deploy.

    Grant Wagner : You had 5 hours of CPU time on a process that has been running 504 hours (3 weeks * 7 days/week * 24 hours/day). So simplistically, on average, your process has been using 1% of the CPU the entire time. In reality, of course, that isn't the case: there are times your service is using near 0% CPU and other times it is using considerably more. I would say it is nothing to worry about, but if you are concerned, you should use perfmon to track your process's CPU usage over time to determine if you have a problem.
  • CPU time is an indication of how much processing time that the process has used since the process has started (in Windows: link to a Technet article.)

    It is basically calculated by:

    CPU Time of Process = Process Uptime * CPU Utilization of Process
    

    For example, if the process has been running for 5 hours and the CPU time is 5 hours, that means the process has been utilizing 100% of the resources of the CPU. This may be either a good or a bad thing, depending on whether you want to keep resource consumption low or to utilize the entire power of the system.

    If the process was using 50% of the CPU's resource and running for 10 hours, then the CPU time will be 5 hours.
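
    To see the same arithmetic in code, here is a minimal C# sketch (assuming .NET's System.Diagnostics API; "MyWcfHost" is a hypothetical process name):

    using System;
    using System.Diagnostics;

    class CpuTimeCheck
    {
        static void Main()
        {
            // assumes the process is running; GetProcessesByName returns an empty array otherwise
            Process proc = Process.GetProcessesByName("MyWcfHost")[0];

            TimeSpan cpuTime = proc.TotalProcessorTime;      // the "CPU Time" column in Task Manager
            TimeSpan uptime = DateTime.Now - proc.StartTime; // how long the process has been running

            // average utilization since start, spread across all logical cores
            double avgCpu = 100.0 * cpuTime.TotalMilliseconds
                            / (uptime.TotalMilliseconds * Environment.ProcessorCount);

            Console.WriteLine("CPU time: {0}, uptime: {1}, average CPU: {2:F1}%",
                              cpuTime, uptime, avgCpu);
        }
    }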

  • If you are concerned about how much CPU time your process is using, you should use perfmon to track your processes' CPU usage over an extended period of time to determine if you have a problem.
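
    If you'd rather log that counter than watch it live, Windows also ships the typeperf command-line tool, which writes performance counters to a file. A rough example (the process name and output path are placeholders):

    typeperf "\Process(MyWcfHost)\% Processor Time" -si 60 -o cpu-trend.csv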

How to integrate FxCop and VS 2008?

If this is a duplicate question, please point me to the proper link and I'll delete this question.

I know that in VS Team System I can use Code Analysis but I'm using VS Professional.

Can you tell me how I can integrate FxCop and Visual Studio? I don't want to add FxCopCmd.exe to my Post-build events to run FxCop with every compilation. I want to be able to run FxCop when I choose, by right-clicking on the project in Solution Explorer.

Thanks for your help.

From stackoverflow
  • How about setting up FxCop as an external tool in Visual Studio? Here's the link:

    http://msdn.microsoft.com/en-us/library/bb429389(VS.80).aspx

    Vadim : +1 for giving the link. The information in the link doesn't work, but it gave me an idea of how to solve this problem. Thanks.
  • It took a while, but I finally figured it out. It's not ideal, but it works.

    Update: I created a post with step-by-step instructions:

    Thanks to aamit, who provided the link that put me on the right track, even though the solution in the MSDN article doesn't work. Give him +1; he deserves it.

    1.) In the FxCop GUI, save your project.

    IMPORTANT:

    • a. Save the project in the same directory as your solution.
    • b. Give the FxCop project the same name as your solution, including the .sln extension.

    For example: If your solution name is MySolution.sln, the FxCop project name is going to be MySolution.sln.FxCop.

    2.) In Visual Studio, select Tools -> External Tools

    3.) Enter the following information in the External Tools dialog box:

    • Title: FxCop
    • Command: C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe
    • Arguments: /c /p:"$(SolutionDir)\$(SolutionFileName).fxcop" /cXsl:"C:\Program Files\Microsoft FxCop 1.36\Xml\VSConsoleOutput.xsl"
    • Initial directory: C:\Program Files\Microsoft FxCop 1.36

    Make sure that the "Use Output window" checkbox is checked.

    That's it. It works for me; I hope it's going to work for you.

  • I run a command very similar to Vadim's as a Post-Build event for the project. That way, I get the FxCop errors as soon as I recompile. But our commands are pretty much the same; nice to know at least two people in the world reached the same conclusion!

    The errors do show up in Visual Studio's Error List pane.

    "%ProgramFiles%\Microsoft FxCop 1.36\FxCopCmd.exe"
    /file:"$(TargetPath)" 
    /console
    /dictionary:"$(SolutionDir)Res\FxCop\CustomDictionary.xml"
    /searchgac 
    /ignoregeneratedcode
    

    (You can omit the /dictionary argument; it just points to one of my custom dictionary files since FxCop flags a few variable names as Hungarian notation when they aren't. I also line-wrapped the command for readability.)

    It does make the build a little longer, but it's nice to see the errors right away and to tailor the command settings per project. Good luck!
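
    In case it saves someone a search: the post-build command lives under Project Properties -> Build Events -> "Post-build event command line", and it ends up in the .csproj as something like this (a sketch, reusing the command above):

    <PropertyGroup>
      <PostBuildEvent>"%ProgramFiles%\Microsoft FxCop 1.36\FxCopCmd.exe" /file:"$(TargetPath)" /console /searchgac /ignoregeneratedcode</PostBuildEvent>
    </PropertyGroup>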

Cannot find the right <div> with selector after loading the HTML

I hope I can find a solution to this one. Here it goes:

After I have submitted my form to add a message to the database, I create a <div> with the load method. After that I do some stuff in the callback function.

When the function to create the new <div> has finished, it returns to the calling function that is supposed to prepend the message to the last inserted <div>.

That's where the problem starts:

I use the selector to find $(someDiv : last), but it puts it in the wrong <div>. So, I know it has something to do with the async process.

I haven't really worked out what I could do to fix this, because now I am not working from a callback function.

If you want to see a live example: the source code is cluttered with debug code in the form of alerts, but you can see what it is doing.

From stackoverflow
  • Probably the new div is not being inserted where you think it is. $(someDiv : last) is returning the wrong result because the new div isn't in someDiv, or it isn't at the end. This seems like the kind of thing that Firebug would make easy to debug.

    Rather than using the ":last" selector to find the div, here's a better idea: give the new div an ID and refer to it directly. Or give it a class when you create it, use the class to select it with $(".newdiv"), and then clear the class when you're done. Or simply return the new div to the calling function, so it doesn't need to use a selector at all. Sorry, I didn't entirely understand your situation, but I think at least one of these solutions will work.
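
    As a rough sketch of the class-based approach (the container and message variables here are made up):

    // when you create the div, tag it with a class so it's easy to find again
    var $msg = $('<div class="newdiv">').appendTo('#messages');

    // later, select it by class, prepend the message, then untag it
    $('.newdiv').prepend(messageHtml).removeClass('newdiv');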

  • Did you notice that the first time, the value gets added to the first item, and after that it gets added to the previously last item? That might be a clue. Oh, and I got stuck in a loop which added my message 10 times. Looks like your code has timing issues.

    Edit: bringing it back to its basics would look something like this: http://kruisit.nl/so/841982/ Hope that helps.

    Jeroen : You're using IE?
  • Ajax calls are asynchronous; you should be using a callback function to work with the loaded data, for example:

    $('div').load(url, data, function(data, status) {
        // this code runs only after the response has been inserted
        $('someDiv:last').dosomething();  // 'someDiv' and dosomething() are placeholders
    });
    
    Nadia Alramli : I'm glad you found a way around it

Database-Generated Human-Friendly Codes

I'd like to create some human-friendly codes to identify my objects.

I'm thinking about using the following rules:

  • 6-digit random number
  • the first character is not zero
  • each code has an edit distance value of 2 or greater* from every other such code
  • maybe a checksum too

I'd like my MS SQL database to enforce that the codes I use are not only unique, but also conform to the above criteria.

How would I write a database check constraint to enforce rules such as these?

How could I make the database use such numbers as default values for inserted rows?

*so a single keystroke typo won't retrieve a different-than-intended record

From stackoverflow
  • Use a stored procedure to check whatever constraints you would like for inserted data.

    A stored procedure can also be used to generate these things if needed.

    Although letting the database do all this for you seems like a nice thing, doing it all in code can be easier to maintain in the long run (unless you have dedicated DBAs who love maintaining this kind of stuff).

    I think your idea and algorithm are smart. If you're going to go that far with the requirements, I would say a checksum is a great thing to have. The checksum alone can catch typo errors, regardless of edit distance.

    Zack Peterson : You're right. With a simple single-digit MOD 10 checksum added to a simple unique number, I'd end up with edit distances of 2 or greater.
  • Create a stored proc that calculates your numeric value; use it as the DEFAULT() value for the column in your table definition. Note: I haven't tried this, so I don't know if it's completely possible. (In SQL Server, a DEFAULT constraint can call a scalar function, but not a stored procedure, so a function may be what you actually need here.)

  • How many id's do you need?

    You could declare the column as an identity, and set the start value to 100000, and the increment to 12. That would produce a six digit number, with edit distance of 2.

    Also, as a bonus, this is pretty fast. But you may run out of numbers as this isn't all that dense.

    CREATE TABLE [Items]
    (
        [id] int IDENTITY(100000,12) NOT NULL primary key,
        [Data] varchar(50) NULL
    )
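
    A quick usage sketch: each insert supplies only the data, and the identity produces the spaced-out codes.

    INSERT INTO [Items] ([Data]) VALUES ('first');   -- gets id 100000
    INSERT INTO [Items] ([Data]) VALUES ('second');  -- gets id 100012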
    
    Zack Peterson : I'll have to think about that. I may need another digit.
    Zack Peterson : I'd also rather that the numbers are in a random order so they don't imply unintended information such as the total number of records.
  • Write a one-time-use program to populate a table of all (or many) possible valid codes in a scrambled order with an integer primary key.

    Code table:

    Id   HumanFriendlyCode
    
    1    100124
    2    991302
    3    201463
    4    157104
    ...  ...
    

    Then just relate the objects table to the rows in that codes table with an auto-incrementing integer foreign key and a unique constraint.

    Thing table:

    Id                                    CodeId  ...
    
    e9d29b14-0ea6-4cfd-a49f-44bcaa7212eb  1       ...
    91906bb7-14ed-4acc-bf23-c4bd1631797f  2       ...
    41ace075-f9f8-46b7-b114-cb17765c4e76  3       ...
    2fba1a58-7a91-4da6-a4a2-7cacef8603db  4       ...
    

    Anyone ever done something like this?
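
    For the one-time population step, a T-SQL sketch (rather than a separate program) could look like this; the table name dbo.Codes is made up, and the last line of the WHERE clause applies the mod-10 checksum from the constraint in the next answer:

    -- build every valid code, inserted in scrambled order (NEWID() randomizes)
    ;WITH Digits AS (
        SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(d)
    )
    INSERT INTO dbo.Codes (HumanFriendlyCode)
    SELECT a.d * 100000 + b.d * 10000 + c.d * 1000 + e.d * 100 + f.d * 10 + g.d
    FROM Digits a CROSS JOIN Digits b CROSS JOIN Digits c
         CROSS JOIN Digits e CROSS JOIN Digits f CROSS JOIN Digits g
    WHERE a.d > 0                                    -- first digit is not zero
      AND g.d = (a.d + b.d + c.d + e.d + f.d) % 10   -- last digit is the checksum
    ORDER BY NEWID();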

  • This check constraint will enforce a checksum in a database column:

    ALTER TABLE tblCode
    ADD CHECK (
        CAST(SUBSTRING(CAST(Code AS VARCHAR), 6, 1) AS INTEGER) =
        (
            CAST(SUBSTRING(CAST(Code AS VARCHAR), 1, 1) AS INTEGER) +
            CAST(SUBSTRING(CAST(Code AS VARCHAR), 2, 1) AS INTEGER) +
            CAST(SUBSTRING(CAST(Code AS VARCHAR), 3, 1) AS INTEGER) +
            CAST(SUBSTRING(CAST(Code AS VARCHAR), 4, 1) AS INTEGER) +
            CAST(SUBSTRING(CAST(Code AS VARCHAR), 5, 1) AS INTEGER)
        ) % 10
    )
    

    The sixth digit must be the sum of the first five digits, modulo 10. For example, for the five digits 12345 the check digit is (1+2+3+4+5) % 10 = 5, so the full code is 123455.

  • Your enemy as far as UI goes is code length. I suggest you add alphabetic characters (omitting zero, oh, one, and eye). It will cut the length substantially, and make collisions less likely, especially for transpositions.

catch odbc exception

I need to recognize in my application whether a table doesn't exist or has no rows, so I can take appropriate action. Can I catch these two errors separately?

>>>cursor.execute("delete from TABLE")

Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
dbi.internal-error: [IBM][CLI Driver][DB2] SQL0100W  No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table.  SQLSTATE=02000
 in EXEC

OR

Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
dbi.program-error: [IBM][CLI Driver][DB2] SQL0204N  "SK77531.TCS_EXCEPTIONS" is an undefined name.  SQLSTATE=42704
 in EXEC
From stackoverflow
  • From the Python documentation:

    A try statement may have more than one except clause, to specify handlers for different exceptions.

    For example:

    try:
        do_something_crazy
    except AttributeError:
        print 'there was an AttributeError'
    except NameError:
        print 'there was a NameError'
    except:
        print 'something else failed miserably'
    

    The last except acts as a catch-all here, and is only executed if an exception different from an AttributeError or NameError occurs. In production code it's best to steer clear of such catch-all except clauses, because in general you'll want your code to fail whenever an error you didn't expect occurs.

    In your specific case you'll need to import the different exceptions that can be raised from the dbi module, so you can check for them in different except clauses.

    So something like this:

    # No idea if this is the right import, but they should be somewhere in that module
    import dbi
    
    try:
        cursor.execute("delete from TABLE")
    except dbi.internalError:   # the traceback's "internal-error" is a display name, not an attribute
        print 'internal-error'
    except dbi.progError:
        print 'program-error'
    

    As you'll see in the documentation page linked above, you can opt to capture additional attributes in each except clause. Doing so will let you access the actual error object, which might be necessary when you need to distinguish between two different exceptions of the same class. Even if you don't need such a fine level of distinction, it's still a good idea to do a bit more checking than I outlined above, to make sure you're actually dealing with the error you think you're dealing with.

    All that said about try/except, what I'd really recommend is to search for a method in the database library you're using to check whether a table exists before you try to interact with it. Structured try/excepts are very useful when you're dealing with outside input that needs to be checked and sanitized, but coding defensively around the tentative existence of a database table sounds like something that's going to turn around and bite you later.
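
    For DB2 specifically, one way to do that pre-check is to query the SYSCAT.TABLES catalog view. A sketch (adjust to your driver, and prefer bound parameters over string interpolation if it supports them):

    def table_exists(cursor, schema, name):
        # query the catalog instead of provoking an SQL0204N error
        cursor.execute("SELECT 1 FROM SYSCAT.TABLES "
                       "WHERE TABSCHEMA = '%s' AND TABNAME = '%s'"
                       % (schema.upper(), name.upper()))
        return cursor.fetchone() is not None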

    Richard : Thank you very much for your response. One note: the hyphenated names from the traceback gave me AttributeError: 'module' object has no attribute 'internal'; the dbi attributes progError and internalError worked for me. Thanks.

webform or winform, how to choose?

What would be the normal way to decide which way to go?

In my case, what if

  • user-base was under 10 persons
  • you have no control over CAS but can install the framework
  • needed to import/export, let's say, Excel or PDF files
  • would be intranet
  • security is really important
  • business logic is somewhat complex
From stackoverflow
  • Winforms is probably slightly "richer" in that you can do more on the client (since you're running a full-fledged application vs. just a browser), i.e. it's a "rich client"; on the other hand, a Webforms app doesn't need to be installed on each and every machine since it's a "thin client", so that's a bonus.

    As for the business logic, I would separate that out into its own layer/tier anyway, which could be used from either Winforms or Webforms - no deciding factor here, I think.

    So really, whichever you feel more comfortable with is probably fine. You don't really give many good reasons to choose one over the other....

    Marc

  • Your "cases" either do not distinguish between Webforms and Winforms, or are too vague to be used to make a decision.

    A typical Webforms application connects to a server somewhere on the internet; all data and logic are stored remotely and sent to clients. A typical Winforms application keeps all data and performs all logic on the user's machine. This should be the basis for which one you use.

    Will a bunch of people be contributing to your data, with no one user owning it? Is the data not useful unless all contributions are available? Are the contributions unreliable, or should individual users be protected from possibly dangerous contributions? Webforms are the way to go.

    Or is the data the sole responsibility of one person? Does the data need to interoperate with other files, applications, or hardware used by that person? Will each user act separately from everyone else who uses the data, and do you want to isolate each user's data from others? You should use Winforms.

    Yes, you can run a Webforms app locally on one machine, and you can create collaborative internet Winforms apps, but the two technologies were created for different purposes and you will be better off using each for the purpose it was designed for.

  • This seems to be a bit of a black sheep of a thread, but let's go for it anyway.

    For a small distribution, a thick client (WinForms) is pretty much easier to develop for. One of its biggest flaws is the difficulty of deployment. CAS will be more of a factor with a thick-client app (it's basically a non-factor in a Webforms app), but in the end, with the basics you have described, it shouldn't really become a factor. Then again, this may be a product of not having enough information.

    Where WebForms thrives is ease of maintenance and the lack of client deployment. Additionally, it is much easier to deploy a WebForms app on the Internet should that ever become a priority.

    The actual task you describe could easily be accomplished with either technology; it really all comes down to the specifics. Need a highly or easily distributed app? Go with WebForms. Need to provide easy upgrades? Again, WebForms. Need to provide a rich client experience? Then go with WinForms.

    It really all comes down to the many specifics you haven't provided, and to your particular skill set, which also isn't mentioned. What you have requested (as far as we know) can be provided using either technology.

Should I create form objects or generate them from model

I recently found out that I can generate forms from models. I've read the docs and am wondering why someone would opt not to do this in favor of creating their own form objects.

From stackoverflow
  • Not all forms are directly related to a Model. While most of the time you might end up using a ModelForm, sometimes you want to create a form for something that is better expressed as an individual form and then you can patch the data together in the view. In my experience this isn't often, but it happens sometimes.

  • Riffing off of Paolo's answer, if you are collecting data from the user to store in your database, you will probably already have a model set up with fields for the information that needs to be collected. In this case, use a ModelForm. On the other hand, sometimes you have scenarios where instead of collecting data from the user, you're using their input to perform some action on data already in the database, perhaps, say, generating a view that's a mashup of information from multiple models. In this case, you will want to use Form. The Form will still allow you to perform validation on the user input comparatively easily.